by Suff » 30 May 2017, 09:34
I noticed, just before the financial crisis, that if you wanted to get into certain banking jobs as a contractor you had to have prior experience. These areas were the ones which collapsed.
I have noticed, recently, that contract jobs in the airline industry now insist that you have prior airline experience. They simply won't touch you if you do not.
Given the airline failures around the globe, that sends a certain message to me.
As for outsourcing, you can do it well or you can do it badly. RBS did two stupid things in one go: they deployed a software update that the developers insisted was not fully tested, and they did it only one week after handing over the management of the data and the data recovery to Infosys. Of course we know the end result. The software failed, the Infosys team screwed up the rollback, and RBS wound up in an unknown position, unable to either take payments in or make payments out... for more than a week, while they coded a fix.
When I worked at Linde Gas they had outsourced to T-Systems. One Monday I logged on to find my mail server down. A data storage tech had logged into one storage system on the SAN in that data centre and deleted all the drive volumes for one of the clustered mail servers, before the backup ran. Within 30 minutes the same engineer logged into the second storage system, in the second data centre 10km away, and deleted all of the drive volumes for the second mail server in the cluster, also before the backup ran. Then, when they went to recover it, they restored with the wrong option. After mail began to arrive on the server but not reach people's mailboxes on their PCs and laptops, they had to take the whole system offline, keep the restored mailboxes with the new mail, restore the original mailboxes properly, and then the Linde guys (not T-Systems) had to copy across all the new mail received between the first restart and the second, correct one.
Back in 2003 they were running on the one power feed in, not a feed from both supplies. When that feed failed the generators kicked in to maintain the system, but nobody phoned the support company to come out and ensure that the fuel tanks were full and remained full. The generators ran dry and all systems crashed, hard, overnight.
These are just the incidents of idiocy that I've seen or had reported to me directly. It does not surprise me that BA has had this problem. It's almost certain that they let the people with the knowledge leave the company during the outsourcing and then, when the system failed, found out all the things they didn't have documented and didn't think to ask about during the handover.
The outcome of the BA failure is that everything will be properly documented. This time. And for the next 5-10 years that documentation will stand them in good stead, until it happens again, they find they have not updated it, and all the new stuff is at risk again. Companies that don't do due diligence with their documentation tend to do it in spurts as problems arise, then leave it to moulder over the years.
I have seen quite a large chunk of jobs turn up in the security and identity space over the last few weeks. I'm sure we'll see some airline jobs coming up fairly soon too.
Good disaster recovery is not just about backing up data or making provisions to "survive" a disaster. That is part of DR, but it is not what good DR is about. Good DR is all about the DR testing, the backup recovery testing, the procedures, the processes, the staff knowledge and the constant little disruptions required to validate the DR scenarios and the way the system responds to them.
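To make that concrete, here is a minimal sketch of the kind of automated restore test I mean: actually unpack the latest backup into a scratch directory and check every file against a checksum manifest, rather than just checking that the backup job ran. The details are my own assumptions for illustration, not anything from BA, RBS or Linde: nightly .tar.gz archives in a hypothetical /var/backups/mail directory, each with a "<archive>.sha256" manifest of "digest  filename" lines next to it.
[code]
#!/usr/bin/env python3
"""Restore-verification sketch: prove the latest backup actually restores.

Assumptions (illustrative only): nightly backups land in BACKUP_DIR as
.tar.gz archives, each with a "<archive>.sha256" manifest alongside it.
"""
import hashlib
import tarfile
import tempfile
from pathlib import Path

BACKUP_DIR = Path("/var/backups/mail")   # hypothetical backup location
MANIFEST_SUFFIX = ".sha256"              # hypothetical manifest naming


def latest_backup() -> Path:
    """Pick the most recently written archive; fail loudly if there is none."""
    archives = list(BACKUP_DIR.glob("*.tar.gz"))
    if not archives:
        raise RuntimeError("No backups found - the DR test has already failed")
    return max(archives, key=lambda p: p.stat().st_mtime)


def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large mailbox dumps don't eat RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(archive: Path) -> bool:
    """Unpack the archive into a throwaway directory and compare every
    restored file against the manifest - i.e. test the restore itself,
    not merely the existence of the backup."""
    manifest = archive.parent / (archive.name + MANIFEST_SUFFIX)
    expected = {}
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        digest, name = line.split(maxsplit=1)
        expected[name.strip()] = digest

    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)          # the actual "restore" step
        for name, digest in expected.items():
            restored = Path(scratch) / name
            if not restored.is_file() or sha256_of(restored) != digest:
                print(f"FAIL: {name} did not restore correctly")
                return False
    return True


if __name__ == "__main__":
    ok = verify_restore(latest_backup())
    print("Restore test passed" if ok else "Restore test FAILED - page someone")
[/code]
Run something like that on a schedule and you find out on a quiet Tuesday that your backups don't restore, instead of finding out in the middle of the disaster.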
BA is a global 24x7 business. It has to be online 24x7x366. As such, DR testing is always under pressure. Now they know why their DR specialists were always on their case.
There are 10 types of people in the world:
Those who understand Binary and those who do not.