Before AI, you needed to trust the recipient and the provider (Gmail, Signal, WhatsApp, Discord). You could at least make educated guesses about both for the risk profile. Such as: if someone leaks the code to this repo, it's likely a collaborator or GitHub.
Today, you invite someone to a private repo and the code gets exfiltrated by a collaborator running whatever AI tool simply by opening their IDE.
Or you send someone an e2ee message on Signal but their AI reads the screen/text to summarize and now that message is exfiltrated.
Yes, I know it's "nothing new" and that "in principle this could happen because you don't control the client". But opsec is also about what happens when well-meaning participants become accomplices in data collection. I used to trust my friends enough to not share our conversations. Now the default assumption is that text & media on even private messaging will be harvested.
Personally I’m not ever giving keys to the kingdom to a remote data-hungry company, no matter how reputable. I’ll reconsider when local or self-hosted AI is available.
> I used to trust my friends enough to not share our conversations. Now the default assumption is that text & media on even private messaging will be harvested
I would seriously reëvaluate my trust level in a friend or colleague who installs a non-ADA screen reader on their phone. At least to the level of sharing anything sensitive.
What about when devices come with such a "feature" baked in? Android has Magic Cue, Windows has Recall. How long until they're opt-out, or "accidentally" enabled with an update, or just on at all times? And "sensitive" can be whatever details I want to share with that friend. It can be as benign as giving them an address or phone number, or maybe a medical diagnosis, or a crypto wallet number.
Is your position that anyone who's not tech savvy enough to constantly fight the onslaught of shady business practices and dark patterns that most tech companies throw at them is not worthy of their friends' trust?
For most people, asking them to guarantee their own devices won't spy on them is a tall order.
> Is your position that anyone who's not tech savvy enough to constantly fight the onslaught of shady business practices and dark patterns that most tech companies throw at them is not worthy of their friends' trust?
Trust is a function of character and competence. Not understanding how your technology may be compromising you is, within the scope of keeping secrets, a fracture of competence.
I can’t repair a car. My friends would be correct in not trusting me to go under their cars’ hoods unsupervised. Similarly, a friend or colleague who cannot be trusted to understand the device they’re using cannot be trusted with matters of confidence in that context.
> I can’t repair a car. My friends would be correct in not trusting me to go under their cars’ hoods unsupervised
What a letdown of an answer. Who said anything about "going under the hood"? This is about simply using a device. Unfortunately, control of your phone is shared with a manufacturer or OS developer with shady practices and interests that don't align with yours. You are tasked with operating a device that is more than occasionally actively hostile and subversive towards you, the owner and user.
You probably don't fully understand almost any of the things you will ever interact with in your entire life. When some of those things betray you, I bet you won't consider it a matter of your own incompetence. Come to think of it, one day you'll realize you've lived long enough to screw something up in almost every area you've touched.
> who cannot be trusted to understand the device they’re using
In my previous comment I made it crystal clear that this is about using a device and gave concrete examples of dark patterns that would challenge even an expert. And you still misunderstood, and still wrongly assume that it's a matter of "competence". That's a fracture of competence if I've ever seen one.
Like a victim of a robbery in a bad neighborhood showed a fracture of competence by not understanding bad-neighborhood-dynamics.
This sounds awfully like blaming individuals for not being able to fix a systemic problem on their own.
Assuming the courts simplify Otter AI down to being a glorified call recording and transcribing tool (because the fact it's "AI" isn't really relevant here w.r.t. privacy/one/two-party-consent rules), then doesn't the legal responsibility here lie with whichever person added Otter AI to group calls without informing the other members?
----
EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via their Slack/Zoom/etc. integrations is by itself legally wrong:
> "In fact, if the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host," the lawsuit alleges. "What Otter has done is use its Otter Notetaker meeting assistant to record, transcribe, and utilize the contents of conversations without the Class members' informed consent."
I'm surprised the NPR article doesn't touch on the possible liability of whoever added Otter in the first place - surely the buck stops there?
>doesn't the legal responsibility here lie with whichever person added Otter AI to group calls without informing the other members
IANAL, but companies providing a product have certain responsibilities too, especially when it's intended to be used for a given purpose (i.e., recording meetings with other people on the call). Most call recording software I come across has a recording notice that can't be disabled, presumably to avoid lawsuits like this.
>EDIT: So the crux of the matter is whether having Otter AI automatically join meetings via their Slack/Zoom/etc. integrations is by itself legally wrong:
Note that the preceding paragraph also says that even when the integrations aren't used, Otter only obtains consent from the meeting host. In all-party-consent states that's clearly not sufficient.
>because the fact it's "AI" isn't really relevant here
Again, IANAL, but "recording" laws might not apply if they're merely transcribing the audio? To take an extreme case, it's (probably) legal to hire a stenographer to sit next to you in meetings and transcribe everything on the call, even if you don't tell any other participants. Otter is a note-taking app, so they might have been in the clear if they weren't recording for AI training.
I had a conversation with a lawyer who had invited OtterAI to our confidential meeting. I was gobsmacked, and I quickly read Otter's privacy statement -- my impression was that they retain your data in a cloud service and use your "anonymized" (or was it "depersonalized"?) recordings as future training data. Even if they have a bona fide reason for all that, I question their ability to store the data securely and succeed in anonymizing data that contains unique identifiers that could be tied to future court records. I refused to continue in the presence of the bot.
And, even beyond security is their ability to hold promises made over the data in the event of a private equity takeover, a rogue employee, etc.
Apparently, I am "inviting" Otter into very private work meetings and I don't even have an account. Someone else had it on a meeting and Otter took it upon itself to send notes to all people in the meeting...from "me". It said I was inviting them. I am being trolled by an AI bot and spreading it like a disease to others, and I didn't even know it until I saw the invitation from me in a meeting I was in...and that's when I first learned it had attached itself to me. This is like a virus and I am trying to figure out how to stop it.
I've been getting Otter AI ads on the various ad-supported streaming services I watch. The ad shows a scenario where a couple of people are tapped for a last-minute meeting, but they've got other things to attend to (lunch, PTO), and they just have Otter sit in their place in the meeting.
I may be a dinosaur, but I was shocked at how casual they made this look (I know, it's just an ad). I would be fired almost instantly at $ENTERPRISE if I did this. It almost looks like it's designed for corporate espionage.
This is not why or how 99% of people use Otter. If that’s implied in their marketing, then their marketing team is terrible.
Tell HN: Otter.ai bot recording meetings without consent (2022) - 176 comments
https://news.ycombinator.com/item?id=32751071
I think we should have an opt-out standard via a subsonic signal, like DO NOT TRACK in browsers. Then, it's on the vendors to intentionally ignore a clear signal.
To that end, I've been working on open sourcing https://dontrecord.me as a side project. I'm also putting together a fork of Whisper that will follow the opt-out signal. If anyone wants to help, please connect.
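Roughly, the detection side looks like this (a minimal sketch; the 19.5 kHz beacon frequency, frame size, and threshold are illustrative placeholders, not a published spec):

    # Hypothetical sketch: detect an inaudible "do not record" beacon in an
    # audio frame. The tone frequency, frame length, and threshold are all
    # illustrative assumptions, not the dontrecord.me spec.
    import math

    BEACON_HZ = 19_500      # assumed near-ultrasonic opt-out tone
    SAMPLE_RATE = 48_000    # must be > 2 * BEACON_HZ for the tone to be visible
    THRESHOLD = 1e-3        # tune against the real noise floor

    def goertzel_power(samples, target_hz, rate):
        # Power of a single frequency bin via the Goertzel algorithm.
        n = len(samples)
        k = round(n * target_hz / rate)
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s1 = s2 = 0.0
        for x in samples:
            s1, s2 = x + coeff * s1 - s2, s1
        return (s1 * s1 + s2 * s2 - coeff * s1 * s2) / (n * n)

    def opt_out_present(frame):
        return goertzel_power(frame, BEACON_HZ, SAMPLE_RATE) > THRESHOLD

    # A compliant recorder would check every captured frame and refuse to
    # keep audio while the beacon is detected.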
I can't believe I am going to say this, but I am on the AI company's side.
They recorded the call and sent it to all participants. It's not their fault the users are idiots.
That's the problem though: getting the email with a copy of the recording may be the unwitting participants' first indication that the call was recorded without their knowledge or consent.
Otter's defense is that it's up to their users to inform other participants and get their consent where necessary; the claim of the lawsuit is that Otter is deliberately making a product which does not make it obvious that the call is being recorded, and by default does not send a pre-meeting notice that it will be joining and recording.
I remember being on sensitive Zoom calls and seeing Otter.ai join. I had to track down which person was using it, and even they were clueless as to how it got there, and the client kept rejoining despite the user trying to stop it.
I've never used this service so I don't know if the user was being particularly clueless or if some dark pattern was at play; I suspect it's probably a little bit of both.
You knew what to look for, and were presumably logged into a device which showed you the participants... now imagine if you were on the go, or not very technical, so you joined from a phone using a phone number and a meeting code... you'd never see it or be aware. I contend Otter knows exactly what they are doing, and hopefully this lawsuit gets them to be a bit more transparent and law-abiding.
If it is just a tool, then a tape recorder company should be liable as well. It doesn't give any sort of notice or make it obvious that someone is being recorded.
If the tape recorder were to automatically start recording whenever a meeting started and silently add itself to physical rooms unexpectedly... you might have a point.
The tape recorder manufacturer also doesn't claim the right to permanently own anything its users record, with or without permission.
I mean, the standard product experience here is that everyone gets a visual or audio warning that the meeting is being recorded.
Are you sure?
I’ve used Otter to record convos without it joining the meeting.
Notion's AI meeting recording also works without any participants being alerted. Same with Limitless.ai (check out the Limitless pendant for the most extreme example of no-consent recording).
Most of these AI meeting recording services make it easy to silently/secretly record. No consent seems to be the default UX.
No, I meant the standard product experience for meeting recording tools.
Right. The standard UX does not include anything that would indicate to other attendees that the meeting is being recorded.
I’m still not sure what we’re talking about. When I turn on recording in Zoom, a booming voice announces that “this meeting is being recorded” to all participants. In older conferencing systems you’d get similar recorded warnings and sometimes loud beeps every so often.
That’s if you’re using Zoom’s built-in recording.
Services like Notion, Otter, Limitless, etc. all provide the ability to record without anything visible in the meeting. They don't rely on Zoom's built-in recording feature, meaning the meeting will appear as if no recording is taking place.
They work via an app you install on your computer which hooks into your computer's audio in/out.
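For the skeptics, a minimal sketch of how little is involved (using Python's sounddevice and scipy libraries — tooling chosen for illustration, not what any of these vendors actually uses):

    # Minimal local-capture sketch. Nothing here talks to Zoom/Meet/Teams,
    # so no recording indicator can possibly appear for other participants.
    # Capturing the remote side as well would additionally need an OS
    # loopback/virtual audio device, which varies by platform.
    import sounddevice as sd
    from scipy.io import wavfile

    SAMPLE_RATE = 16_000    # 16 kHz mono is typical speech-to-text input
    DURATION_S = 10

    audio = sd.rec(int(DURATION_S * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="int16")
    sd.wait()               # block until the capture finishes
    wavfile.write("meeting_local_copy.wav", SAMPLE_RATE, audio)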
I don't understand why this relatively simple conversation is taking so long. Let me summarize:
1. Standard built-in meeting recording tools (like Zoom, Meet and Teams) all give a warning to other participants, and have done so for years. These are the gold standard for what users expect from recording-enabled tools, because they've been (essentially) the primary game in town and I can bet that everyone reading this conversation has encountered the warning at least once before Otter etc. came along.
2. If you want to build your own meeting recording tool that joins the meeting as a "participant" (or as an add-on to a real participant's own machine) then it seems like a good idea to try to mimic the same warning UX for the other participants. At worst you're following "best practices" and at best you're insulating yourself from a lawsuit like this one.
3. I don't know what you're talking about when you mention Otter (the subject of TFA) because I clearly see Otter notetakers join as a participant. Or per the docs: "Otter Notetaker can automatically join your Zoom, Google Meet, or Microsoft Teams meetings and transcribe your meetings in real-time." I can't see any reason why, as a separate participant, they would be technically unable to display a written warning on the video feed and/or emit an audio warning.
4. From perusing Otter's site, it looks like you can also upload a Zoom recording (which will cause Zoom to emit the standard warning) or you can sync in an Otter app, which requires Zoom Admin and presumably has capabilities like "warn the other users" since the integration is pretty tight.
5. Even if the software is running purely locally, I imagine there are ways to insert a warning into the meeting. Maybe in a few cases there is some technical reason it's absolutely impossible that I don't understand! But frankly, I do not believe the lack of warning is purely due to technical limitations, since Otter's Zoom bot participants also don't recite one. I'm fairly certain that even for tools that purely use local feeds, there is likely a way to emit this warning to other users if you want to badly enough (one low-tech possibility is sketched after this list).
6. If this is a purely technical argument about the limitations of the product used in that one specific case, then let's have it.
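Re: point 5, here's the kind of low-tech thing I mean (a sketch only; it assumes playback can be routed into the meeting's input, e.g. via a virtual audio cable, which is the hand-wavy OS-level part):

    # Sketch for point 5: speak an audible consent notice before capture
    # starts. pyttsx3 plays through the default output device; whether the
    # announcement actually reaches other participants depends on routing
    # that output into the meeting's input (e.g. a virtual audio cable) --
    # that OS-level routing is assumed here, not shown.
    import pyttsx3

    def announce_recording():
        engine = pyttsx3.init()
        engine.say("Notice: this meeting is being recorded and transcribed.")
        engine.runAndWait()     # blocks until the announcement finishes

    announce_recording()
    # ...only then start the actual capture loop...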
I’m also not sure why you’re not seeing the main issue.
Common meeting recording tools do not require consent, and their UX actively discourages it.
Just today, Notion launched a new feature in their desktop app which causes a pop-up on your desktop whenever you're about to join a Google Meet. All it says is "Would you like to record this meeting? Please ensure you gather consent from attendees."
Of course the consent gathering isn’t enforced, and it records and transcribes silently in the background without joining the meeting.
I understand it’s technically possible to do what you’re saying. I also understand the built-in recording of Google Meet, etc, all make it obvious that the meeting is recorded.
The only point I'm trying to make is that it's extremely common for meeting recording tools to not join meetings and not ask for consent directly. If you ignore or deny this is happening, I fear you're fooling yourself into thinking you're not being recorded when in fact you very well might be, unknowingly, by any of your meeting attendees.
You aren't talking about the standard product experience with Otter.ai, are you? Hard to tell.
That's the crux of the article.
I’m talking about the standard product experience for meeting recordings, using the built-in tools. Why would you do something different?
> Last year, an AI researcher and engineer said Otter had recorded a Zoom meeting with investors, then shared with him a transcription of the chat including "intimate, confidential details" about a business discussed after he had left the meeting. Those portions of the conversation ended up killing a deal,
I'm sorry, but this is another example of not checking AI's work. The excessive recording is one thing, but blindly trusting the AI's output and then using it unreviewed as a company document for a client is on you.
I checked the original tweet to try and understand this better, and what appears to have happened is that Otter kept recording after he left and the VCs stayed on the call chatting (for hours, according to the tweet). This violates the assumption baked into the recording agent (that all participants of the call have a right to a transcript of the whole call) by repurposing a scheduled meeting into a party-line/just-chatting sort of situation.
You could fix this by training people not to use booked meetings this way but I'm not sure how realistic that is to do. I think it might be that services like Otter need to be adjusted to take into account that not every part of a meeting is of equal sensitivity.
e.g. my HOA's monthly meetings have a private period for the board only and a public period for all residents. If Otter were used in this configuration, it would broadcast the exact details of those private discussions to the whole building, which might include board members discussing details that shouldn't be shared with everyone.
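Concretely, the adjustment could be as simple as only sharing the transcript segments that overlap each participant's own time in the call, using the join/leave events the platform already has (a hypothetical sketch; the data shapes are illustrative, not Otter's actual model):

    # Hypothetical fix: give each participant only the transcript segments
    # that overlap the window(s) in which they were present.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        start: float    # seconds from meeting start
        end: float
        text: str

    def transcript_for(segments, presence):
        # Keep a segment iff it overlaps some (join, leave) interval.
        return [s for s in segments
                if any(s.start < leave and s.end > join
                       for join, leave in presence)]

    segments = [Segment(0, 60, "intro with everyone"),
                Segment(60, 7200, "VCs chatting after the guest left")]
    # The researcher left at t=60s, so he should only ever get the intro:
    print(transcript_for(segments, presence=[(0.0, 60.0)]))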
> I think it might be that services like Otter need to be adjusted to take into account that not every part of a meeting is of equal sensitivity.
One would like to think that a company transcribing company meetings of varying degrees of sensitivity would have the feature you're describing built in, if for nothing else than the auditing process that's usually involved in adopting new software.
Maybe companies in a rush to adopt the latest AI tooling just aren't fully considering what they're doing.
That's part of it too, but the bigger issue is the knowledge and consent of the other participants.
I know someone who is involved in a lawsuit regarding a child, and one of the lawyers used this service to record and transcribe a very confidential meeting. Their first awareness of the illegal wiretapping by this company was when a summary email showed up at the end of the meeting. Needless to say, they weren't happy, not just about the surreptitious recording, but also the discovery that the contents of that confidential call will live forever in Otter's training set. When the company was asked about this, they dismissed any responsibility of their own and noted it was the responsibility of their subscribers to use the product appropriately.
This just seems like massive user error. The same thing could have happened in a low tech environment. And the notetaker just made it more obvious.
Ex: Hop on a conference call with a group of people, Person A "leaves early" but doesn't hang up the phone, then the remaining group talks about sensitive info they didn't want Person A to hear.
> Person A "leaves early" but doesn't hang up the phone, then the remaining group talks about sensitive info they didn't want Person A to hear.
I'm sorry, but any conference software will make it extremely clear who is still on the call. Again, I do put a lot of this scenario down to user fault. But the fact that this software is "always on" instead of "activated/deactivated" feels like an incomplete software suite to me personally.
> who is still on the call
On internet/app-based systems, yes... but on legacy telephone systems you have to remember all 16 of the '<Person> is joining the call' announcements and mentally check them off when you hear '<Person> is leaving the call' on the way out. Of course, you have no idea who joined the meeting before you arrived.
You didn't even have to make that mistake once to know not to keep talking on a call that anyone can dial into after you think everyone has left.
Why does Otter AI exist? Aren't those features built-in to videoconferencing now?
Most of the ones built into the video conferencing solutions aren't as good.
Zoom is aight
A month ago a potential customer automatically included their Otter.ai meeting agent in a Teams call. The customer never turned up (he canceled the meeting somewhat later), but a colleague and I chatted a bit in the meeting. Then the Otter.ai meeting agent posted a link in the chat, from which it was clear that everything had been recorded, up to a complete video of the meeting with full facial imagery.
As I'm a European citizen, I filed a GDPR removal request with them to remove all images of me from their servers. The email address that they list in their privacy policy [1] for GDPR requests immediately bounces and tells you to reply from an Otter.ai account (which I don't have). I was able to fill in a contact form on their website and I did receive replies via email after that.
After a few emails back and forth, their position is that
> You will need to reach out to the conversation owner directly to request to have your information deleted/removed. Audio and screenshots created by the user are under the control of the user, not Otter.
> We are required by law to deny any request to delete personal information that may be contained within a recording or screenshot created by another user under the CCPA, Cal. Civil Code § 1798.145(k), which states in relevant part
> “The rights afforded to consumers and the obligations imposed on the business in this title shall not adversely affect the rights and freedoms of other natural persons. A verifiable consumer request…to delete a consumer’s personal information pursuant to Section 1798.105…shall not extend to personal information about the consumer that belongs to, or the business maintains on behalf of, another natural person…[A] business is under no legal obligation under this title or any other provision of law to take any action under this title in the event of a dispute between or among persons claiming rights to personal information in the business’ possession.”
Which is a ridiculous answer to give a European user, as the CCPA doesn't apply to me at all. Furthermore, I don't think the CCPA prohibits them from deleting my face from their servers; it merely stipulates that I can't compel them under the CCPA. Otter.ai can perfectly well decide to do this themselves, or be compelled under the GDPR to delete the data, and their Terms and Conditions make it clear they may delete any user or data if they wish to do so.
After these emails, and me threatening to file a lawsuit, "Andrew" from "Otter.ai Support Team" promised to escalate the matter to his manager, but I got ghosted after that: they simply stopped replying.
So I'm going to file that lawsuit (a "verzoekschriftprocedure" under Dutch law) this week. It's going to be a very short complaint.
[1] https://otter.ai/privacy-policy
And out of nowhere, after posting this comment, Otter.ai has now responded after ghosting me for 3.5 weeks. They are no longer quoting the CCPA, but are now misinterpreting the GDPR, claiming that every user is their own little GDPR data controller island and they're merely a "hosting platform". It's all very convenient and creative.
Their response:
> Thank you for reaching out to Otter.ai. Under Articles 12 and 17 of the GDPR, Otter.ai is able to delete personal data that is stored in and controlled by your own account. However, Otter.ai cannot delete personal data that is stored in another user's account. In those cases, Otter.ai acts as the processor or hosting platform, and the other user is the controller for that content. As such, only that account holder has the authority to remove the content.
> If you wish to have such data deleted, we recommend that you contact the relevant user directly and exercise your rights under the GDPR with them.
> Thank you,
> Otter.ai Privacy Team
To which I responded:
> To whom am I speaking? Is this the Privacy Officer? Why have you been ignoring emails for 3.5 weeks since the 23rd of July, while a GDPR request was filed on the 8th of July?
> You know very well that a meeting agent of Otter.ai, the emails by Otter.ai, and the website of Otter.ai fall under the direct responsibility of Otter.ai as data controller. Your privacy statement in no way supports a narrative that Otter.ai acts as a so-called "hosting platform". It's preposterous to suggest that every one of your users – a private person, not a company – would be their own little GDPR data controller island and you're merely an accidental processor of data. Jurisprudence is very clear on this, and this notion will be outright rejected.
> The deadline has long passed; I'm initiating a court procedure this week.
> Hoogachtend, [Dutch: "Yours sincerely,"]
What curious timing! Glad you're using your rights to punish this company. A coworker at a prior company used Otter.ai once or twice, and from then on we all called it the Otter Infection until IT was able to purge it from our systems somehow. It kept getting into meetings it had no business getting into.
That's terrible customer service and irresponsible of them. I find it wild that the bot would join without the user present.
They're not a European company.
How is that relevant? You seem to be assuming GDPR doesn’t apply to US companies.
Their own privacy policy acknowledges their obligations under the GDPR.
Wait until they find out about Granola AI