Meta said the contractor "did not meet (Meta's) standards". I am sure that is true. Meta's "standard" is not to reveal the illegal, immoral, unethical things Meta does. No matter what the harm.
Maybe a company with those standards should not get our business. Oops, no wait, maybe they mean the Friedman Doctrine standards? In that case they are entitled to do anything and everything to make a profit. No matter what the harm.
I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.
So when I say that they really do have a zero-tolerance policy for anyone using their internal systems to violate user privacy, it's not because I'm eager to defend them. It's just true (at least, it was when I was there). There are internal systems dedicated to making sure you have access to what you need to do your job, and absolutely nothing else. All content you interact with through internal tools is monitored and logged. If you get caught trying to use whatever access your job gives you for anything other than doing your job, security immediately escorts you out of the building. This is drilled into new hires early and often. For everything Meta gets wrong, they really do take this seriously.
Yea but no. Meta is a defense contractor that hires out to third parties exactly to do this. So you guys don't get to do that, but a lot of other people do. I hope that helped you sleep at night while you were there. But yea, it all gets bought and sold at the end of the day.
The irony is Meta wants to implement verification to protect kids. Meanwhile it's doing everything it can to exploit them at every single level, for profit and for the love of the game. Billions of dollars and the world's most advanced computers, all dedicated to it.
> At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI.
They just got fired for "piercing the veil". They committed the sin of bringing attention to the invasion of privacy.
Unfortunately in today’s world where organizations are larger than many a country’s GDP, they really only have to face responsibility towards shareholders and maximizing profits is the thing they usually care about.
That's not what the Friedman Doctrine is, technically. It is that management should obey moral, ethical, and legal frameworks in the operation of the business for the benefit of its investors; and specifically NOT take actions which are outside of that narrow scope.
Is it illegal or immoral? Having Meta review this material requires users' approval and consent.
There was an example in the article where a user’s glasses kept recording the user’s wife after he took them off. That’s bad but on the user, not Facebook.
Seems similar to a situation where someone takes nudes of someone without their consent and then sends them off to a lab to be printed. The lab isn’t doing anything illegal or unethical printing them when they ask the user “are these legal” and the user replies “yes.” Unless you want to stop photo printers from ever printing nudes, I think the responsibility is on the user, not the firm.
Meta cancels the contract with the outsourcing company it hired to classify smart glasses content, after employees at the company blow the whistle about serious privacy issues with the content they were paid to classify.
How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.
It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).
> How else do you want companies to remove and prevent CSAM?
Different situation.
Facebook has to do CSAM because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.
And "just don't have facebook" isn't a solution because every publication of any sort has to deal with this problem; Any newspaper accepting mail has this problem. (Albeit to a much more scaled down version) People were nailing obscene things to bulletin boards for all recorded history.
---
In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it; it actively collected such data itself. It could have, at any point before or after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.
---
This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to.
But OpenAI is not such a firm. It's a general AI company.
CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.
The correct way to organize social media is in a federated way. Each server only holds, on average, a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media would be suppressed a hundredfold, because banning people is way easier on small servers.
Not many moderators would have to look at CSAM, because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.
Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.
Isn't it more that tech companies are just more high profile and integral to the political and social landscape than older companies; but reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism?
They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.
Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.
> It is not up to you to deprive anyone their right to use them.
I don't see anyone saying that people don't have the right to use them. I see people saying that they have the right to avoid being anywhere near the people who use them and to disapprove of those people. Which is just as much of a right as the right to wear spy glasses.
It is unfortunate that a large number of users here are not hackers, not even in an idealistic philosophical sense, and will betray the public good for their own short-term gain.
I'm glad to see opinion seems to be swaying back in this direction. It was only a few months ago that the general sentiment seemed to be "times are different than the glasshole days, it's fine now."
>I don't think that's fair. Smartglasses have legitimate purposes.
I think that's true in principle, but in practice there are going to be two kinds of smart glasses users: extraordinarily annoying kids or young adults acting obnoxiously in public so they can post videos to social media, and then normal people who have no clear sense of how much they're violating the privacy of those around them, and just like cool tech.
Very, very few users are going to be an interesting or valid use case -- e.g. someone who is using them to assist with a disability, or for research, or something.
Even most dash cams don't stream to Meta -- they just record the last _n_ hours, and you need to know to save off the video if you're in a crash / incident. In other words, most of the time no privacy is violated, and the only potential privacy violation occurs during an incident.
Even police body cams, which I wholeheartedly support, have some pretty strong downsides: currently, if you're at the end of your rope, having the worst day of your life, and in your dishevelment turn a speeding ticket into a BATLEO, you're famous forever for being a lunatic. Maybe the rest of the time you're a good person, and you can learn from this and move on. Except now you have a permanent albatross around your neck. This is a secondary penalty that the justice system did not intend, and has no answer for.
I saw there is at least one company working on offline smart glasses for disabled users. I don’t have such a problem with this, and I wonder if the industry as a whole could be nudged in this direction. Offline glasses seem more ok to me.
It makes a lot of sense for actual accessibility devices to be offline-capable. You don’t want to lose your “sight” when you step into a metal building or elevator.
> Very, very few users are going to be an interesting or valid use case
You then list a mere two categories.
Would your argument have been similar in 2008 if told that in ten years, everyone in the economic first world would be carrying multiple cameras including a dedicated "selfie" camera at all times?
You say that like it's assumed that ubiquitous smart phones were obviously a good thing, when it sure seems like there's an increasing number of people questioning that assumption.
I'm not sure I understand the point about a dedicated "selfie" camera; however, I think we're conflating "percentage of users" with "varieties of use cases." I think there could be quite a cornucopia of potential use cases, but I think per capita most people will not actually be making use of these. As other commenters have pointed out, I'd be a lot more tolerant if the data were not constantly piped to Meta.
The point about a dedicated selfie camera was that in 2008, few would have considered taking selfies to be a major use case that would drive >90% of teens and adults to have a camera which has no other reasonable purpose. In the age of FaceTime calls, it would seem absurd to question why it's needed, but nothing like that was mainstream in 2008, which would lead to the same argument of "there are very few legitimate reasons to want such a camera (and it will enable creepshots)".
My wider point is that there are already many obvious use cases, and as adoption of cameras which are always on or plausibly always on rises, there will be a lot more, including augmented reality, translation, context hinting, AI agent awareness for assistants and personal security, and at least dozens of others, some of which I am sure no one has started building for, yet.
Meta is probably not the winner in this space (or, I hope not, at least, so we agree there!). However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.
Thanks for clarifying, I appreciate it. I'm so burnt by the potential downsides (and by the last ~19 years of smartphones) that I don't think we can see eye to eye, but I really appreciate you taking the time to expand on your point so I could understand your perspective.
I can't deprive someone of their right to use them, but I can refuse to interact with someone who's wearing them. This seems like a fair natural consequence. Feel free to wear them, but I won't speak to you when you do.
dash cams are local and pointing at the road, not everywhere.
body cams are local and mostly used by law enforcement to guarantee they are not abusing their power.
glassholes are connected to the cloud. you may have the right to record in public space, i have the right to remain anonymous in the crowd and not be constantly targeted by an advertisement company.
Even if 1% of the corner cases are legit uses (blind people having the glasses describe the world around them is fantastic), 99% of the people using them are assholes that deserve to be put in the ground and the glasses smashed.
I am blind, and I could imagine several use cases which would make my life a lot easier by using glasses like this. But because of their reputation I will most likely never use them, and especially not in public. I'm already afraid enough people will think I'm recording them when I use my phone to get info about what's around me; definitely don't need to get punched in the face for wearing Meta on my face.
Edit: Not that I would want Meta to get all that data anyway. But even if glasses exist which are more privacy conscious, I think Meta and Google Glass thoroughly ruined the reputation of any kind of wearable like this.
I can imagine there are many use cases for blind people, but I also think having some kind of visual indicator that "these glasses are recording" would be good. I don't know what tools you use in public at the moment, but if you use, for example, a white cane, it might help people to understand "this person is using a camera for assistance". But yes, the fact that glasses manufacturers have already demonstrated they want to take every frame of data they can does sour their reputation.
I seem to recall that when the Snapchat glasses were a thing, they had a very bright and obvious ring of LEDs around the camera itself, bright enough to shine through a sticker placed over them. Sure, there are still ways to defeat that, but it makes it a bit harder.
Also I just googled for what the light actually looks like when it's recording, and it's not even really that visible...
I'm sorry you are dealing with the social repercussions of assistive technology. I really wish companies weren't so gross, and that they did not endanger some of the advantages of advances like this by being gross.
I have 2 kids in single digit ages (1 under 5). I bought meta gen 2 last month and I cannot describe how many sweet moments I have captured. My kid loves to sing while playing with dolls and stops as soon as I flip my phone out to record.
I hope you can appreciate that you're capturing this data for Meta and their contractors and that they have the capability of doing whatever they want with this data. My spouse and I ask everyone taking pictures of our kid to never post them to social media, because Meta et al. create a shadow profile using those pictures, and they can share those photos with contractors and with other people, and we don't want a company like that to have my son's data without his 18-year-old self's consent.
I get this argument and largely agree with it in regard to these Meta glasses. It's why I don't currently use them.
But I'd like to have some smart glasses that do respect my privacy and offer this kind of functionality. Honestly, most of the things smart glasses do today are stuff I'd really like. Having my glasses just be the bone conduction headphones I often wear anyways? Check. Easy access to taking photos and short videos of life experiences? Love it. Integrated into the thing I'm often wearing on my head anyways? Perfect.
If the "subject" is human, those seem rather few. Surgeries come to mind, though smart glasses would be more a convenience there. Maybe some psychiatric patients, where a doctor wants to review snippets of his interactions with lower-level staff or his family members? Law enforcement trying to record interactions between informants and targeted criminals - though the latter might wise up pretty quick. Security staff at some very-high-security facilities.
I already noted it in the answer. If a person feels at risk, or even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public, just as they could with a phone.
Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.
There are dozens if not hundreds of cameras pointed at the street that record people every time they go out in public in any urban setting.
If someone is recording you on video with a smartphone, you are generally aware of it, because it has to be pointed at you. Sure, you have a right to record people in public, there is no reasonable expectation of privacy in a public place, but I would quite like to know if you are recording me. I'm also not terribly worried about people recording me having sex or being naked in public without my knowledge...
> Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.
At least this says something about the intention. Someone who films with a hidden phone implicitly shows that they intentionally hid this from the people being filmed.
Filming with glasses is hidden by design. It gives plausible deniability to the person filming, so they can film covertly but pretend they weren't hiding anything.
In most cases this doesn't make a difference but there are some cases where the premeditation can make it worse for the person doing the "abusive" filming.
>> even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public
Big assumption here that the place you're on vacation doesn't have different laws. You may have absolutely no right to record "everything and everyone" around you.
If you walk up to me and shove a camera in my face I'll get very loud and very angry with you very quickly. That's kind of paradoxical, if you intended the camera to make you feel safer. I don't think I'm in the minority.
I do not want my employees recording their day job and selling it, or the creepy dude next to me in the bathroom filming my goods or the log jam flying out of my butt so meta can try to sell me pepto.
I also don't want that one time I did something minorly illegal, like jaywalking, getting auto-fed into Palantir so they can ship me to the latest internment camp.
Or someone stealing my biometrics by just walking past me.
> Smartglasses have reasonable and legitimate uses. People also use bodycams that record continuously, such as for legal reasons. People have a right to record in public, such as if they feel at risk. Are you going to go after car cameras next?
None of those default to sharing your recording with anyone else, let alone with no practical way to opt out.
I know a bunch of people who use smart glasses, and I've used Ray-Ban Meta glasses myself for two years (mostly as speakers/mic, but occasionally the camera as well for some random shots, like cycling in a forest at a beautiful sunset). My default assumption for many years has been that if any photo/video goes to the cloud, it can potentially be leaked/stolen/used. I keep this assumption for both smartphones and smart glasses, yet I'd be happy to switch to Apple glasses when they're finally out.
Calls to stop speaking or interacting with people who use smart glasses sound like the dumbest thing I've read on HN ever.
You're aware of the privacy implications but think people talking about avoiding people who use them are proposing dumb arguments? I don't follow your logic.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
Because a person wearing glasses usually can move, and video surveillance cameras usually can't?
If that's not it then spell it out for me, please.
Also, why would i be deceptive in this discussion? I feel like I missed some ideological conflict.
Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.
On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery / NCII?
I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.
Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.
Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.
The general populace is much more likely to notice the former recording rather than the latter.
I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.
At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.
When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing
A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.
People recognize GoPro cameras for what they are. They are easily understood as a camera. Glasshole devices are not as easily recognizable and people honestly may not realize they are being recorded especially when the glasshole does not inform everyone they are being recorded.
Now, for your "while cycling" qualifier, why does it matter? Again, if you stop to talk to people while recording and it is not obvious you are recording, you're a glasshole. Personally, I have no experience with camera quality from the devices, but I do know what a GoPro can do. My gut instinct is that the GoPro will be superior footage.
A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.
Meta says it's because Sama did not meet its standards, a criticism Sama rejects ...
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable.

But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy.

Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
Huh. There you go again, thinking everyone else is an idiot. Capture of video data of users by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glass, ever.
Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.
I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.
The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.
> > Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?
Well, you gotta give out blackmail material to the scam centers somehow. Otherwise they don't actually have leverage! Oh right... We don't want that happening.
What you should have read correctly was the Facebook terms of service. I still get strange responses when I tell people that I don't use WhatsApp. All Meta's properties are tainted such that I won't use them.
I once read the manual of one of those small floor-cleaning robots (an Ecovacs Deebot U2 Pro), and it basically said that by using it you were giving them the right to take pictures and send them to a remote server (to analyze issues or something like that).
Are you conflating telemetry with literally live-streaming your life to Meta? Because that's what makes the statement weird.
edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.
Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?
Streaming to an unknown number of employees might be considered different from streaming to the public, sure.
But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).
About the "they asked us to view it and then fired us for it": having worked in their RL division (I don't work at Meta anymore), this story is quite weird for two reasons:
1. Meta, AFAIR, paid/compensated people — contractors or people recruited via ads — to have them submit their data. There are strict privacy protocols and reviews in place to distinguish data use in these cases vs. the general public. This is not to say the process is perfect, but if these users are the general public, I would be very shocked.
2. Hiring contractors to submit data is a more controlled environment than recruiting the general public via ads to submit data, and the former has better-understood privacy disclosures than the latter. In practice, this means asking contractors to wear glasses and "move around their surroundings naturally and do things" fits cleanly with the privacy practice of "the data you are submitting, we can view and use all of it for purpose X and nothing but X". But applying this framing to ad-recruited people — general users who willingly submit data — is much, much harder. My suspicion is that they are running ad-based recruiting among the general public, and while those users may have signed a privacy statement, it is very surprising that they did not tighten the privacy practices around the use of the data and who has access to it.
So these people wearing the glasses have already agreed that Meta can monitor them. They also probably trust Meta when it says "when the glasses are off, nothing is recording", for better or worse. With that perspective in mind, it's not far-fetched to assume these same people will willingly be naked in front of recording devices they believe to be off.
Of course, anyone who has opened a newspaper in the last 10 years or so would know better, but I can definitely see some people not giving a fuck about it.
One of the bigger commercial niches for smart glasses is filming POV porn, so it is hardly surprising that sort of content ended up in the moderation queue. The project should have planned to account for that use case.
And I do appreciate how awkward it is for Meta to admit that use case exists. Even in the Oculus Go days there were a bunch of polite euphemisms internally to avoid mentioning "our device has to ship with a browser so people can watch porn on it"
This is my question too. I get moderating things that people are posting. Not being familiar with the device and how it works, I'd assume that all footage is posted to the user's cloud account even if not publicly posted. This being cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted footage types are not being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it.
The thing that really gets me is that internally there are 4 levels of data: 1 being public-domain shit (the sky is blue), up to 4, which is private user data, or anything that would be sensitive if leaked or shared.
I was told that by default all user data is level 4, as in: if you do anything without decent approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
Where I worked, in visual research, we had to jump through a year's worth of legal hoops to get permission to record videos in public. We had to build an anonymisation pipeline and a bulletproof audit trail, delete as much data as possible, and auto-delete everything if something went wrong.
We had rigid rules about where that data could be stored and _who_ could access it. We were not allowed to send "wild" footage (i.e. data with even a hint of anyone who hadn't signed a contract) out for annotation, because it would be given to a third party. The public datasets we released had every traceable person and location covered by signed legal waivers.
Then I hear they just started fucking hosing private data to annotators to _train_ on, without any basic controls at all? Just shows that whenever Zuck or monetization wants something, the rules don't apply.
I look forward to that entire industry collapsing in on itself.
I believe the tricky privacy and security issues around smart glasses (and other "personal" tech) can be navigated successfully enough by a thoughtful, diligent, responsive company.
Which is why I'd never touch a personal tech device from Meta.
Their entire DNA is written to exploit their users for profit. In my judgement, they literally cannot and will never consider those issues as anything other than something to obscure to keep people unaware of the depth of the exploitation.
At this scale, this sounds like some insider-joke contract, made up only to run a hustle on the side, capitalizing with stock options on the possibility of ad-hoc news-trading bots glitching out on keyword signals (here, "x.com/sama").
I think Meta, like all companies, doesn’t want its subcontractors creating bad press for them.
So it doesn’t surprise me that Meta didn’t renew (or cancelled) a contract that is a net negative for them. Arguing over the reason seems fruitless, as no reason is needed per the terms of the contract (I assume, since breach of contract wasn’t raised by the sub).
If you want to read more about how unsavory aspects of AI-training are off-loaded onto poor workers in third-world countries, would recommend Karen Hao's "Empire of AI". These workers are paid pennies an hour for unstable jobs that expose them to some horrific material.
Why do they even need workers to classify naked content? They could filter some content prior to passing it to workers. They already have models to moderate explicit content.
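A triage pipeline along those lines is easy to sketch. `nsfw_score` below is a stand-in for whatever explicit-content classifier a company already runs, and the thresholds are purely illustrative; the point is that only the ambiguous middle band ever reaches a human:

```python
# Sketch of a triage pipeline: an automated classifier handles the clear-cut
# cases, and only ambiguous items reach human reviewers.

BLOCK_THRESHOLD = 0.95   # confidently explicit: auto-remove, no human needed
CLEAR_THRESHOLD = 0.05   # confidently safe: auto-approve

def triage(items, nsfw_score):
    """Split items into auto-blocked, auto-cleared, and needs-human buckets."""
    auto_blocked, auto_cleared, needs_human = [], [], []
    for item in items:
        score = nsfw_score(item)
        if score >= BLOCK_THRESHOLD:
            auto_blocked.append(item)
        elif score <= CLEAR_THRESHOLD:
            auto_cleared.append(item)
        else:
            needs_human.append(item)  # only this slice is ever seen by workers
    return auto_blocked, auto_cleared, needs_human
```

Tuning the two thresholds trades classifier errors against reviewer workload, which is presumably why some explicit material still lands in human queues.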
Yeah, I think it's more of a British English thing. It can also mean things like "in a fight". Like: "those two guys had a big row outside the pub the other night"
The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.
It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.
Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.
The impulse to "ban" is sometimes a bit naive imo.
The owner of the private space generally has authority to deny this already, there's no need for an additional law.
In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate ADA... A court would have to weigh in if glasses are allowed or not for ADA... but I suspect there's already a case where a movie theater banned such glasses and they would probably(?) win, since such individuals could be expected to have non-recording glasses.
Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.
> What's the difference between that and one of the many, many concealed camera options that you don't even notice?
The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!
Because they will be popular and lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options that only a tiny tiny fraction of people would buy or use (and that we should also regulate)
Facebook may have to rename itself into NaughtyBook or SpyBook, or Pr0nBook. They really want people to help them spy on other people here - including their sex life. Expect new sexy videos in 3 ... 2 ...
> and was a common practice among other companies.
Meta isn’t lying; you should assume other companies are doing it too. Tesla did it with their cameras, and you should assume any company with access to your camera does the same. I would even assume CCTV cameras too. That’s why, for anything sensitive, you should try to use open-source stacks; you might lose some features, but it’s a needed compromise.
So I've never had a smart speaker in my house (Alexa, Apple, Google). I've just never been comfortable with the idea of having an always-on, cloud-connected microphone in my house. Not because I thought these companies would deliberately start listening and recording in my house, but because they will likely be careless with that data and it'll open the door for law enforcement to request it. Consider the Google Wi-Fi scraping case from StreetView.
Or they might start scanning for "problematic" behavior, a bit like the Apple CSAM fingerprinting initiative.
So not one part of me would ever buy Meta glasses (or the Snap glasses before that). You simply don't have sufficient control over the recordings and big tech companies can't be trusted, as we've witnessed from outsourced workers sharing explicit images. And I bet that's just the tip of the iceberg.
I honestly don't understand why anyone would get these and trust Meta to manage the risks.
That is to say nothing of the new technological use cases that could develop from already existing technology. They just haven’t been thought of or developed yet.
Things like audio-scanning your living space using those Alexa smart speakers, with ultrasonics, to get an image of not only everything in your space but where you are in that space as well.
That technological use case only came out within the last five or so years, maybe closer to eight. Either way, I could see it coming before it became a thing: ultrasound imaging of an unborn child is a thing, ultrasound imaging of the sea floor is a thing, so why wouldn’t ultrasound imaging of your living space be a thing for a company that wants to know what you buy?
I never, ever had Alexa. I only ever had a Google Home, because I got it for free with GPM, but I almost never used it because I hated the idea of it always listening.
I already regret Wi-Fi, because they’ve now figured out how to look through walls with it.
They just got fired for "piercing the veil". They committed the sin of bringing attention to the invasion of privacy.
There was an example in the article where a user’s glasses kept recording the user’s wife after he took them off. That’s bad but on the user, not Facebook.
Seems similar to a situation where someone takes nudes of someone without their consent and then sends them off to a lab to be printed. The lab isn’t doing anything illegal or unethical printing them when they ask the user “are these legal” and the user replies “yes.” Unless you want to stop photo printers from ever printing nudes, I think the responsibility is on the user, not the firm.
OpenAI had them classify CSAM, so Sama fired them as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/
We're 4 years on, 3 years since that report broke. Not a single thing has improved about how tech companies operate.
It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).
Different situation.
Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on Facebook, so Facebook must moderate.
And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem; any newspaper accepting mail has it (albeit at a much smaller scale). People have been nailing obscene things to bulletin boards for all of recorded history.
---
In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.
---
This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to.
But OpenAI is not such a firm. It's a general AI company.
The correct way to organize social media is federation. Each server holds on average only a few hundred or a few thousand people. Server moderators should be legally responsible for the content on their server. CSAM on social media would be suppressed 100x, because banning people is far easier on small servers.
Not many moderators will have to look at CSAM, because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
These are pretty clear laws established by a democratic government with a pretty good record for rule of law.
Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.
Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action: they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
I don't see anyone saying that people don't have the right to use them. I see people saying that they have the right to avoid being anywhere near the people who use them and to disapprove of those people. Which is just as much of a right as the right to wear spy glasses.
I think that's true in principle, but in practice there are going to be two kinds of smart glasses users: extraordinarily annoying kids or young adults acting obnoxious in public so they can post videos to social media, and then normal people who have no clear sense of how much they're violating the privacy of those around them and just like cool tech.
Very, very few users are going to be an interesting or valid use case -- eg: someone who is using them to assist with a disability, or for research, or something.
Even most dash cams don't stream to Meta -- they just record the last _n_ hours, and you need to know to save off the video if you're in a crash / incident. In other words, most of the time no privacy is violated, and the only potential privacy violation occurs during an incident.
Even police body cams, which I wholeheartedly support, have some pretty strong downsides: currently, if you're at the end of your rope, having the worst day of your life, and in your dishevelment turn a speeding ticket into a BATLEO, you're famous forever for being a lunatic. Maybe the rest of the time you're a good person, and you could learn from this and move on. Except now you have a permanent albatross around your neck. This is a secondary penalty that the justice system did not intend and has no answer for.
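The "record the last _n_ hours" behaviour described above is essentially a ring buffer: footage is only retained transiently on-device unless the user explicitly freezes it. A minimal sketch, with plain values standing in for timed video chunks and all names hypothetical:

```python
from collections import deque

class DashcamBuffer:
    """Ring buffer of recent footage: old segments drop off automatically,
    and nothing persists unless the user explicitly saves an incident."""

    def __init__(self, max_segments: int):
        self._segments = deque(maxlen=max_segments)  # oldest evicted first
        self.saved = []

    def record(self, segment):
        """Append the newest segment; the deque silently discards the oldest."""
        self._segments.append(segment)

    def save_incident(self):
        """Explicit user action: freeze a copy of the current window."""
        self.saved.append(list(self._segments))
```

This is the privacy-relevant property: unlike a cloud-streaming wearable, the default outcome is that everything recorded is eventually overwritten.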
For parents, smart glasses are awesome: no need to pull out a phone to take a picture, no need to view the world through a phone screen.
They're also useful as regular BT headphones: podcasts while walking without tiny earbuds to lose.
It makes a lot of sense for actual accessibility devices to be offline-capable. You don’t want to lose your “sight” when you step into a metal building or elevator.
You then list a mere two categories.
Would your argument have been similar in 2008 if told that in ten years, everyone in the economic first world would be carrying multiple cameras including a dedicated "selfie" camera at all times?
My wider point is that there are already many obvious use cases, and as adoption of cameras which are always on or plausibly always on rises, there will be a lot more, including augmented reality, translation, context hinting, AI agent awareness for assistants and personal security, and at least dozens of others, some of which I am sure no one has started building for, yet.
Meta is probably not the winner in this space (or, I hope not, at least, so we agree there!). However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.
None of the cameras they're discussing are pointed at people all the time.
When you are wearing Meta glasses, they are.
Body cams are local and mostly used by law enforcement to guarantee they are not abusing their power.
Glassholes are connected to the cloud. You may have the right to record in public space; I have the right to remain anonymous in the crowd and not be constantly targeted by an advertisement company.
Even if 1% of the corner cases are legit uses (blind people having the glasses describe the world around them is fantastic), 99% of the people using them are assholes that deserve to be put in the ground and the glasses smashed.
Edit: Not that I would want Meta to get all that data anyway. But even if glasses exist which are more privacy conscious, I think Meta and Google Glass thoroughly ruined the reputation of any kind of wearable like this.
Of course you have to be able to spot that. And trust that it really doesn't record when it's off (note that it simply may be covered by the user)
Also I just googled for what the light actually looks like when it's recording, and it's not even really that visible...
This alone doesn't outweigh all of the negative uses, but I would argue that it's reasonable and legitimate.
But I'd like to have some smart glasses that do respect my privacy and offer this kind of functionality. Honestly, most of the things smart glasses do today are stuff I'd really like. Having my glasses just be the bone conduction headphones I often wear anyways? Check. Easy access to taking photos and short videos of life experiences? Love it. Integrated into the thing I'm often wearing on my head anyways? Perfect.
Do you think you will know if someone has their phone in their pocket or in a holster, turned on and recording? You will never know.
There are dozens if not hundreds of cameras pointed at the street that record people every time they go out in public in any urban setting.
At least this says something about the intention. Someone who films with a hidden phone implicitly shows that they intentionally hid this from the people being filmed.
Filming with glasses is hidden by design. It gives plausible deniability to the person filming, so they can film covertly but pretend they weren't hiding anything.
In most cases this doesn't make a difference but there are some cases where the premeditation can make it worse for the person doing the "abusive" filming.
Big assumption here that the place you're on vacation doesn't have different laws. You may have absolutely no right to record "everything and everyone" around you.
Subject to local law. It's an offence to make indecent images of children, for example.
However, it is absolutely not the case that Meta has a right to that data, as a data controller under GDPR.
> feels at risk
This is a red flag phrase: it's a justification that people whip out for all sorts of unjustified things up to and including murder.
Ease off the gas
Oh blind people too. That one makes sense.
But smart glasses that send everything to The Cloud? Burn them all. Especially if they're from fricken' Meta.
Why is it a right?
>Are you going to go after car cameras next?
No. A car cannot follow me into a building very easily. It cannot turn as quickly as a human head.
>Any American who has any opposition to public recording is violating the First Amendment and doesn't even deserve to be an American.
lmao
I do not want my employees recording their day job and selling it, or the creepy dude next to me in the bathroom filming my goods or the log jam flying out of my butt so meta can try to sell me pepto.
I also don't want that one time I did something minorly illegal, like jaywalking, auto-fed into Palantir so they can ship me to the latest internment camp.
Or someone stealing my biometrics by just walking past me.
None of those default to sharing your recording with anyone else, let alone with no practical way to opt out.
I do not want to live in such a dystopian country. No this right shouldn't exist and I'm glad it doesn't in my country.
> If none of this makes sense to you, wait till standalone cameras become much smaller to where they become a smartbutton -- what will you do then?
Why are you against killing? Wait till you don't need to hit them but can accelerate metal pieces at them -- what will you do then?
> Any American who has any opposition to public recording is fighting the First Amendment and doesn't even deserve to be an American.
Anyone who is against X deserves not to be protected by law. "First they came for the communists..."
Smartphones are illegal in your country? I am skeptical.
The right to record is the right to remember.
Great! Now do people with smart TVs and people with smart phones
Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?
Calls to stop speaking or interacting with people who use smart glasses sounds like the dumbest thing I've read on HN ever.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
Security cameras afaik usually don't record audio, but all phones can. And they don't even need to be pointed in any specific direction.
On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery (NCII)?
I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.
Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.
The general populace is much more likely to notice the former recording rather than the latter.
Just because you don’t notice it doesn’t mean it doesn’t happen.
However, this is still a different thing than smart glasses which can further be segmented into who designed the smart glasses.
It's the camera of their smartphone.
Not sure if it's ON though.
https://www.sciencephoto.com/media/922925/view/three-people-...
https://www.istockphoto.com/nl/foto/happy-woman-using-smart-...
Now, for your "while cycling" qualifier, why does it matter? Again, if you stop to talk to people while recording and it is not obvious you are recording, you're a glasshole. Personally, I have no experience with camera quality from the devices, but I do know what a GoPro can do. My gut instinct is that the GoPro will be superior footage.
I do not care which country the outsourcing company is in. When criminals go global, whistleblower protection should go global too.
Name a more iconic duo.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
Congratulations, you have a bright future in politics and/or tech CEOing.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.
> Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?
edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.
Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?
Streaming to an unknown number of employees might be considered different from streaming to the public, sure.
But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).
edit, clarified social media comparison
1. Meta AFAIR paid/compensated people — contractors or recruited via ads — to have them submit their data. There are strict privacy protocols and reviews in place to distinguish data use in these cases vs the general public. This is not to say the process is perfect, but if these users are gen public, I would be very shocked.
… although I really extend that to why are you wearing an internet connected camera that is obviously going to be monitored by Meta.
Because nobody knows how to put a dot of nail polish on an led they don't want seen, right?
Probably this is people asking the glasses something about what they see and the glasses uploading video for classification to generate an answer.
People think it is "just AI" so are not very concerned about privacy.
There is no expectation of privacy in public.
You are the frog being boiled.