23andMe are technically correct in that it's customer behaviour that caused the issue: people reused passwords and didn't use MFA.
They can claim the moral high ground if they like and shift the blame, but the truth is that regardless of WHY the breach happened, it was still a breach and it still happened!
As a software engineer, I believe there's a real argument to be made here that 23andMe were negligent in their approach. Given the personal nature of the data stored, they should have enforced MFA from the start, but they did not. They made an explicit decision to put customer convenience above customer security.
The argument that customers should have made better security decisions is evasive bullshit.
As a software engineer, you cannot trust customers to make the right decisions about security. And customers should not be expected to either - they are not the experts! It's the job of IT professionals to ensure that data has an appropriate level of protection, so that it is safeguarded even against naive user behaviour.
My mom used 23andMe last year and created an account with 2FA. Their 2FA fucked up and never sent the code. She spent weeks on the phone with customer service, but they just shuffled her around. I tried to talk to them, but it was just "I'll escalate this to my manager" and then they'd never call back. Then we tried to get a refund and they refused, so they basically stole 40 bucks from my mom.
They probably never enforced 2FA because they knew it didn’t work and didn’t want to bog down their nonexistent customer service with complaints about their fucked up 2FA. I looked online and my mom wasn’t the only one with this issue. So in that sense, they are responsible IMO.
Maybe I don't really understand what happened, but it sounds like two different things happened:
"The hackers initially got access to around 14,000 accounts using previously compromised login credentials, but they then used a feature of 23andMe to gain access to almost half of the company's user base, or about 7 million accounts."
14k accounts were compromised due to poor passwords and password re-use.
And then they got access to 7 million accounts. Where did that 7-million-account breach come from? Were those 7 million connections of the 14k or something? Because I don't think your connections can see many in-depth details.
Let's pretend that I had an account and that you used the internal social share to share your stuff with me.
I, being an idiot, used monkey123 as my password. As a result, the bad guys got into my account. Once in my account, they had access to everything in my account, including the stuff you shared with me.
Now to get from 14,000 to 7,000,000 would mean an average of 500 shares per account. That seems unreasonable, so there must have been something like your sharing with me gives me access not just to what you shared, but to everything that others shared with you in some kind of sharing chain. That, at a minimum, is exclusively on 23andMe. There is no way any sane and competent person would have deliberately constructed things like that.
Edit: I think I goofed. It seems to be sharing with relatives as a collection, not individuals. As was pointed out, you don't have to go very far back to find common ancestors with thousands of people, so that's a more likely explanation than mine.
From how I understand it, the 14 000 -> 7 000 000 is caused by a feature that allows you to share your information with your "relatives", i.e. people who were traced to some common ancestor.
I'm still quite on the fence about what to think about this. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google's fault?
If we take it a step further - if someone hacks your computer because you are clicking on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the fault of the company for not being able to detect that kind of attack?
Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against credential stuffing is easily bypassed via a botnet, unless it's an always-on CAPTCHA - and good luck convincing anyone to use that), but the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on blaming them instead of the company.
Not because I want to defend the company or something - they have definitely done a lot of things wrong (even though nowhere near as wrong as the users) - but because of security awareness.
Shifting the blame solely onto the company because it "hasn't done enough" only lets the users - who, through their poor security habits, caused the private data of millions of users to be leaked - get away with it, and lets them go on thinking "They've hacked the stupid company, it's not my fault." No. It's their fault. Get a password manager FFS.
Headlines like "A company was breached and leaked 7 000 000 users' worth of private data" will probably go mostly unnoticed. A headline like "14 000 people with weak passwords caused the leak of 7 000 000 users' worth of private data" may at least spread some awareness.
Ok, that makes much more sense! I've done a tiny bit of genealogy, so I knew about the exponential numbers, but I misunderstood the sharing. Yes, I know the feature was described as "with relatives" but I was thinking of "with person". Yes, choosing to share with all relatives in one click would produce huge numbers.
As for where to place the blame, it's tough. The vast majority of people have no concept of how this stuff works. In effect, everything from mere typing into a document to logging in to and using network resources is treated quite literally as magic, even if nobody would actually use that word.
That puts a high burden on services to protect people from this magical thinking. Maybe it's an unreasonably high burden, but they have to at least make the attempt.
2FA (the real thing, not the SMS mess) is easy to set up on the server side. It's easy enough to set up on the client side that if that's too much for some fraction of your customer base, then you should probably treat that as a useful "filter" on your potential customers.
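To illustrate how small the server side really is, here is a minimal sketch of TOTP verification per RFC 6238 (built on HOTP from RFC 4226), using only the Python standard library. The function names and the one-step drift window are my own illustrative choices, not anything any particular company ships:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian 8-byte counter,
    # then "dynamic truncation" down to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret: bytes, code: str, step: int = 30,
                window: int = 1, now: float = None) -> bool:
    # RFC 6238: the counter is just wall-clock time in 30-second steps.
    # Accept +/- `window` steps to tolerate client clock drift.
    counter = int(time.time() if now is None else now) // step
    return any(hmac.compare_digest(hotp(secret, counter + d), code)
               for d in range(-window, window + 1))
```

The `secret` is the per-user key encoded in the QR code the authenticator app scans; `hmac.compare_digest` avoids timing side channels when comparing codes.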
There are any number of "breached password" lists published by reputable companies and organizations. At least one of them (Have I Been Pwned) makes its list available in machine-readable formats. At this point, no reputable company that makes any claims to protecting privacy and security should be allowing passwords that show up on those lists. Account setup procedures have enough to do already that a client-side password check would be barely noticeable.
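The Have I Been Pwned list is queryable through its public Pwned Passwords range API using k-anonymity: only the first five hex characters of the password's SHA-1 hash leave your machine, and you match the remaining suffix locally. A rough Python sketch (helper names are mine):

```python
import hashlib
from urllib.request import urlopen

def match_suffix(range_response: str, suffix: str) -> int:
    # Each response line is "<35-hex-char hash suffix>:<breach count>".
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip().upper() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    # k-anonymity: only the 5-char hash prefix is sent over the network.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urlopen("https://api.pwnedpasswords.com/range/" + prefix) as resp:
        return match_suffix(resp.read().decode("ascii"), suffix)
```

At signup, anything with a nonzero count gets rejected with a "this password has appeared in known breaches" message.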
We know enough about human nature and human cognition to know that humans are horrifically bad at creating passwords on the fly. Some services, maybe most services, should prohibit users from ever setting their own passwords, using client-side scripting to generate random strings of characters. Those with password managers can simply log the assigned password. Those without can either write it in their address book or let their browser manage it. This has the added benefit of not needing to check a password against a published list of breached passwords.
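Assigning passwords instead of accepting them is a few lines if you use a CSPRNG. A sketch in Python (the character set and length are arbitrary example choices; a browser would do the same with `crypto.getRandomValues`):

```python
import secrets
import string

# Example character set - letters, digits, and a few symbols.
ALPHABET = string.ascii_letters + string.digits + "-_.!@"

def assign_password(length: int = 20) -> str:
    # secrets draws from the OS CSPRNG; random.choice would NOT be safe here.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A 20-character string over a ~67-symbol alphabet carries over 120 bits of entropy, far beyond what any guessing or offline-cracking campaign can reach.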
My data will always be at risk of some kind of weak link that I have no control over. That makes it the responsibility of each online service to ensure that the weak links are as strong as possible. Rate limiting, enforcement of known good login policies and procedures, anomaly detection and blocking, etc should be standard practice.
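As a toy illustration of the rate-limiting point, a sliding-window limiter keyed by source IP or target username might look like the following. The class name and thresholds are illustrative, not anyone's production configuration:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sliding-window limiter: at most `limit` attempts per `window` seconds per key."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # key -> timestamps of recent attempts

    def allow(self, key: str, now: float = None) -> bool:
        # Key on source IP, target username, or both; against a botnet that
        # rotates IPs, the per-username key is the one that matters.
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        while q and now - q[0] > self.window:  # forget attempts outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Real deployments would back this with a shared store such as Redis so all login servers see the same counts, but the logic is the same.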
You are right, and the company is definitely to blame. But compared to how other breaches usually happen, I don't think this company was all that negligent - as far as I know, their only mistake was that they did not force users to use MFA. A mistake, sure, but not as grave as what we usually see in data breaches.
My point was mostly that IMO we should, in this case, focus more on the users, because they are also at fault. But more importantly, I think it's a pretty impactful story - "a few thousand people reused passwords and caused millions of users' data to be leaked" is a headline that teaches a lesson in security awareness. I think it would be better to focus on that, instead of on "a company didn't force users to use MFA", which only gets framed as "company has been breached and blames users". That will not teach anyone anything, unfortunately.
I'm not saying that the company shouldn't also be blamed - they did purposefully choose user experience and conversion rate (because bad UX hurts sales, as you've mentioned) over better security practices. I'm just trying to figure out how to get at least something good out of this incident - and "company blames users for getting breached" isn't going to teach anyone anything.
However, something good did come out of it, at least for me - I've realized that it had never occurred to us to put "MFA is not enforced" into pentest findings. This incident makes a great case for starting to do so, so I've added it to our templates.
I agree with everything you've said. One thing that would go a long way to securing accounts would be legislation requiring all government services, banks, and credit unions to implement authenticator-based 2FA. At a minimum.
Those institutions are already very heavily regulated (at least here in Canada), so one more regulation would be no great burden.
With that in place, it would be trivial for everyone else to follow suit, since they'd know that approximately everyone has a second factor and knows how to use it.
Good for you in adding to your testing template. Security is a journey, not a destination, so keeping things up to date is important.
But it's not a breach, it's accounts being compromised. Yes, you can't trust them, but it's still their own fault. And you can't make the data too hard to get at, because otherwise your idiot of a user can't access it either. They should definitely force 2FA, however.
IBM defines "data breach" as: "any security incident in which unauthorized parties gain access to sensitive data or confidential information, including personal data (Social Security numbers, bank account numbers, healthcare data) or corporate data (customer data records, intellectual property, financial information)."
Despite the fact that the attackers used real passwords to log in, they are still an "unauthorized party", because they are not the intended party.
It's also legally the case that using a password to access data you know you are not supposed to access still counts as "hacking".
Well, the authorisation is the password, so from their side it was in fact not a breach, because they just performed a normal login with the correct authorisation (the password).
Potato, potahto.