In a truly "unprecedented" event, Google *POOF*ed the cloud-hosted data for "UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members". ALL DATA. GONE.
Now, lest we think that UniSuper wimped out on their infrastructure design with their Google cloud hosting, they didn't. They paid for their hosting to be properly backed up and to be geographically diversified.
And somehow Google managed to wipe out all of it. Data: gone. Backups: gone.
The details of how Google threw all of this into the bit bucket are unknown.
The saving grace is that UniSuper was smart enough to diversify their cloud hosting with a second provider, and they were able to reestablish their infrastructure by recovering from that second provider. Even so, it cost them two weeks of downtime, plus incremental restoration and a backlog of account transaction processing. So not only was their IT team stressed beyond belief, but their customer service team was constantly having to tell people what was going on and answer why account balances were not correct.
This is supposed to be utterly impossible, and they were quite emphatic that this was not a hacking event. Additionally, most systems should be doing what's known as 'soft deletes', where something that is deleted only appears to have gone away: it is invisible to the outside world but still recoverable. Apparently this data was all gone gone. Thus far no other cloud services provider is crowing that 'this won't happen if you switch to us', because clearly no one thought it could happen at Google until it did.
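For anyone who hasn't run into the pattern, here's a minimal sketch of the soft-delete idea in Python with SQLite. The table, columns, and helper functions are all made up for illustration; the point is just that a 'deleted' row is hidden from normal queries while the data stays put and can be restored.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: a 'deleted_at' timestamp marks a row as soft-deleted.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY,
        owner TEXT NOT NULL,
        balance_cents INTEGER NOT NULL,
        deleted_at TEXT          -- NULL means the row is live
    )
""")
conn.execute("INSERT INTO accounts (owner, balance_cents) VALUES (?, ?)",
             ("example member", 100000))
conn.commit()

def soft_delete(account_id: int) -> None:
    """Mark the row as deleted without destroying the data."""
    conn.execute("UPDATE accounts SET deleted_at = ? WHERE id = ?",
                 (datetime.now(timezone.utc).isoformat(), account_id))
    conn.commit()

def visible_accounts():
    """Normal queries only see rows that have not been soft-deleted."""
    return conn.execute(
        "SELECT id, owner, balance_cents FROM accounts WHERE deleted_at IS NULL"
    ).fetchall()

def restore(account_id: int) -> None:
    """Recovery is just clearing the marker; nothing was actually erased."""
    conn.execute("UPDATE accounts SET deleted_at = NULL WHERE id = ?",
                 (account_id,))
    conn.commit()

soft_delete(1)
print(visible_accounts())   # [] -- the account looks gone to the outside world
restore(1)
print(visible_accounts())   # [(1, 'example member', 100000)] -- still recoverable
```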
Google is quite sincere about 'steps being put in place to prevent this from happening again', but what about explaining, in somewhat abstracted terms, what allowed it to happen in the first place? That would make the IT administrators of the world a lot happier!
https://arstechnica.com/gadgets/2024/05/google-cloud-accidentally-nukes-customer-account-causes-two-weeks-of-downtime/
no subject
Date: 2024-05-19 12:22 am (UTC)

Yeah. UniSuper did a good implementation: they didn't trust all their data to only one cloud provider, and they still got nailed with two weeks of downtime! I don't have a lot of faith in putting IT solutions in the cloud. They may be fine for establishing off-site backups, but those backups still have to be tested.
no subject
Date: 2024-05-18 11:43 pm (UTC)

Hugs, Jon
no subject
Date: 2024-05-19 12:22 am (UTC)

Very much so.
no subject
Date: 2024-05-19 05:45 pm (UTC)

Google, on the other hand ...
no subject
Date: 2024-05-19 06:12 pm (UTC)

I left IT as virtualization and cloud services were just beginning their meteoric rise. And frankly I'm glad. Yes, lots of advantages. But I find it very hard to trust them. I feel a lot safer knowing that I can walk into a server room and point to a box and say "There! That is where your data resides." Now, maybe it's replicated and backed up to multiple clouds, but all your compute is done in-house and right there. But I always worked for state/local governments, so never had the need for really big installs/data centers. Completely different scale of ops.
no subject
Date: 2024-05-20 12:23 am (UTC)

Sadly, that is a very real possibility. Most corps go with multi-site backups, but it's questionable how well they test their recovery plans. A friend of mine is an IT manager (one of many) for a major insurance company. They do disaster recovery drills twice a year, and for as long as he's been with them, which has been in excess of 20 years, they've never had one that went 100% according to plan.