British Library Cyber Attack

There was a major ransomware attack on the British Library in October. It got a bit of coverage at the time, but hasn't had a great deal of coverage in the British media since.

It turns out to have been a lot more significant than we'd previously been told, and it has crippled the ongoing research of thousands of regular users of the library. It's not just online services that are affected, but also the bulk of the physical collections, which are accessed through the digital database. It's expected to be out of action until Easter at the earliest.

A huge portion of our national heritage is currently inaccessible, and yet the main reporting is from foreign outlets.


"The effect on the B.L. has been traumatic. Its electronic systems are still largely incapacitated. When I visited the library last Monday, the reading rooms were listless and loosely filled. “It’s like a sort of institutional stroke,” Inigo Thomas, a writer for the London Review of Books, told me."

It could be just the start. Surely merits a lot more coverage.

 



I don't think there have been many new developments to report on.

I follow a number of cyber security news sources and there hasn't been much about it there. That article you posted is one of the few new stories I've seen. Nothing new from a technical point of view, but it gives a very good picture of the human side of it all.
 
Yup. We’ve been in pretty regular contact with them. We’ve taken local action against their email domain too.
They need to sack their cyber and infrastructure teams; with decent backups they would have been back up and running in weeks.

I heard services could be unavailable for up to a year.
Of course 😊 the APT group have been inside the BL for months and months.
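
On the "decent backups" point: backups only shorten recovery if they are kept offline or immutable, and if someone routinely checks that they exist and can actually be restored. Here is a minimal sketch of that kind of routine check, in Python, with entirely hypothetical paths, thresholds and manifest format (nothing here reflects the British Library's actual setup):

```python
# Minimal sketch of a nightly backup health check. All paths, file names and
# thresholds are hypothetical examples, not any real institution's setup.
import hashlib
import json
import sys
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/offsite-backups/catalogue")   # hypothetical offsite mount
MANIFEST = BACKUP_DIR / "manifest.json"               # hypothetical map: filename -> sha256
MAX_AGE_HOURS = 26                                    # alert if the newest backup is older


def newest_backup_age_hours(directory: Path) -> float:
    """Return the age in hours of the most recently modified backup archive."""
    if not directory.exists():
        return float("inf")
    files = [p for p in directory.glob("*.tar.gz") if p.is_file()]
    if not files:
        return float("inf")
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600.0


def verify_manifest(manifest_path: Path) -> list:
    """Re-hash every file listed in the manifest; return the names that fail."""
    expected = json.loads(manifest_path.read_text())
    bad = []
    for name, digest in expected.items():
        path = manifest_path.parent / name
        if not path.exists():
            bad.append(name)
            continue
        if hashlib.sha256(path.read_bytes()).hexdigest() != digest:
            bad.append(name)
    return bad


if __name__ == "__main__":
    age = newest_backup_age_hours(BACKUP_DIR)
    problems = verify_manifest(MANIFEST) if MANIFEST.exists() else ["manifest missing"]
    if age > MAX_AGE_HOURS or problems:
        print(f"BACKUP CHECK FAILED: newest backup {age:.1f}h old, problems: {problems}")
        sys.exit(1)
    print(f"Backups look healthy: newest is {age:.1f}h old, manifest verified.")
```

In a real ransomware case the attackers typically go after online backups first, so the offline copy and the rehearsed restore matter far more than the backups merely existing.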
 


EThOS is still lost... half a million doctoral thesis titles, and many full-text versions of the theses themselves. About 98% of all PhDs ever awarded in the UK are in there, and it's a valuable research source.
 
It's probably more likely that the cyber and infrastructure teams knew where the weaknesses were, had been banging on about them to management for years, and failed to get sufficient funding or senior buy-in for the hard decisions that needed to be made.

Library systems tend to be terrible, dated pieces of software, and from my experience with them they always fill me with dread. It always feels like they are developed by librarians turned programmers rather than by IT people.

Then there is the general culture. Libraries want to be nice and helpful. Let's give everyone an account. No, you can't withdraw that old system or close those obviously unused accounts; someone might need them. (See the account-review sketch after this post.)

I'm busy writing a cyber security incident response plan for a university, and page one is about managing staff stress and burnout and creating a no-blame culture, based on the advice of people who have been through serious incidents. Firing your IT staff, who know the systems you need to get back, is the worst thing you can do, because you lose all the business knowledge. The second worst is having them do a round-the-clock recovery job fearing they will be sacked or reprimanded, because that leads to increased fatigue, sickness, mistakes and a slower recovery.

A common technique of hackers is to play on this: wait until things are looking bright, then hit again.

At every cyber post-incident review session I have been to, the failing has been found to be at an institutional level. What you just said is completely wrong.
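
On the unused-accounts point above: the fix is usually boring account hygiene rather than clever tooling. A minimal sketch, assuming a hypothetical CSV export of accounts with a last-login timestamp (the column names and the 180-day threshold are invented purely for illustration):

```python
# Minimal sketch, not any real library's system: flag accounts with no login
# for N days so someone has to make an explicit keep-or-disable decision.
import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER_DAYS = 180          # hypothetical policy threshold
ACCOUNTS_CSV = "accounts.csv"   # hypothetical export: username,last_login_iso


def stale_accounts(csv_path: str, threshold_days: int) -> list:
    """Return usernames whose last login is older than the threshold (or missing)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=threshold_days)
    flagged = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last = (row.get("last_login_iso") or "").strip()
            if not last:
                flagged.append(row["username"])      # never logged in
                continue
            logged_in = datetime.fromisoformat(last)
            if logged_in.tzinfo is None:
                logged_in = logged_in.replace(tzinfo=timezone.utc)  # assume UTC if unspecified
            if logged_in < cutoff:
                flagged.append(row["username"])
    return flagged


if __name__ == "__main__":
    for user in stale_accounts(ACCOUNTS_CSV, STALE_AFTER_DAYS):
        print(f"Review or disable: {user}")
```

The point is less the script than the process: someone has to own the flagged list and make an explicit decision about every account on it.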
 
Every war.
Battle.
Conquest.
Over the history of time so much has been destroyed and lost, it's scary imo.
There is so much we don't know, it's sad imo... lost to the ages.
 
Nail. On. Head.
 
Sounds like we're largely doing the same stuff, and you just know how the playbook will have gone here over the years. The internal teams have probably been pissing in the wind for years, shouting for resilience, redundancy, offsite copies and modernised BC/DR capabilities, and not being listened to. The calls will very likely have been made on a purely financial basis by someone without a relevant technical or business background, ticking the boxes with policies written on the back of a fag packet that aren't known, understood, practised or ever tested by anyone, but which let them tell someone further up the chain, or the audit and insurance forms, that "yes, we do have these policies and processes".

Heads very likely should roll eventually, but as bad as things are, when an incident is under way you drop the blame game and get the people who know the systems and have worked with them as involved as possible. As tempting as it is to point fingers, those people are hopefully the ones who can get you out of the hole. And even if you can't get out of the hole, those lower down at the coalface are probably the ones who have been telling the senior teams what they should have been doing. For years.

We always tell our customers and prospective customers one thing: you can spend a fortune trying to stop it from happening, but you need to spend just as much, if not more, on making sure you can come back from it when it does happen.
 
A phrase a former colleague came up with that I really like: "Hundreds of teams of hackers can try to get in millions of times, and they can afford millions of failures. We are a single team defending, and if we fail just once it is very serious."
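
That asymmetry is easy to put rough numbers on. A toy calculation in Python, with an invented per-attempt success probability and assuming attempts are independent:

```python
# Back-of-the-envelope illustration of the attacker/defender asymmetry.
# The per-attempt probability is made up purely for illustration.
p_single = 1e-6            # hypothetical chance that any one attempt gets through
for attempts in (10_000, 1_000_000, 5_000_000):
    p_breach = 1 - (1 - p_single) ** attempts
    print(f"{attempts:>9,} attempts -> {p_breach:.1%} chance of at least one success")
# Roughly 1.0%, 63.2% and 99.3%: the defender has to win every time,
# the attackers only have to win once.
```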
 
