Cloud-Ready Security Key for UK Cancer Charity

Published on
03/04/2023 02:40 PM
How a UK health care services and support provider delivers a cloud-ready security bubble around users, their activities, and the sensitive data they share

The next BriefingsDirect security innovations discussion examines how Macmillan Cancer Support in the United Kingdom (UK) makes ease of use and a sense of security in the services it provides a top IT -- and community service -- requirement.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share his story on how to develop and deliver a cloud-ready security bubble around all users, their activities, and the sensitive data they share is our guest, Tim O’Neill, Head of Information Security at Macmillan Cancer Support in London. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, tell us about Macmillan Cancer Support. It’s a very interesting and worthy organization. I’d also like to hear about your approach to securing a caring and trusted environment for your community. 

O'Neill: We have a unique organization in that when people think of a cancer charity, they often think about the medical side of it and about the pioneering of new treatments. Every day there’s something new in the media about how swallowing a leaf a day will keep a cancer away, and things like that.

But we deal with the actual effects of having cancer. We help anyone who is affected by cancer. That can be a person who’s just had a cancer diagnosis. That can be the family of someone who has a diagnosis, or their employer, or other employees. Anyone who is affected by cancer can come to us for help.

We don’t do a lot in the medical sphere, such as creating new treatments or testing or anything like that. We’re here to look after the impacts that cancer has on your life. We help with the patient’s pathway; we help you understand it and what the implications are – and what might happen next.

We will help you financially if you need help. We believe that nobody should be cold or hungry because of a cancer diagnosis. We provide the cancer nurses who exist in UK hospitals. We train them and we fund them. We have specialist care centers. Again, we fund those. Our psychological care is done through a third party as well. But we manage that, we fund it, we maintain it. We also have an arm that lobbies the government. So, for example, in the UK we had cancer reassigned as a disability.

This means that as soon as you have a cancer diagnosis, you are legally recognized as disabled, and you have all the benefits that go along with that. The reason for that is that once you’ve had a cancer diagnosis, it affects the rest of your life. It does not matter if it’s gone into remission. It will still affect you.

The treatments are invasive. They affect you. We work in many spheres, and we have a lot of influence. We work with a lot of partners. But the fundamental core of what we do is that you can contact Macmillan when you need help.

Gardner: And to foster that level of support, to provide that trusted environment across the full experience, having to jump through six levels of authentication -- or not seeing your e-mails delivered properly -- can stop the entire process.

O’Neill: Oh, absolutely. And we have to be realistic here. We are talking at times of people phoning us at the worst moment of their lives. They’ve just had something come out of the blue or the treatments have gone badly, or they’ve had to have that horrible conversation with their loved ones. And it’s at that very point when they need to talk to us.

We have to be accessible exactly when people need us. And in that instant, we can be the difference between them having a completely honest, open, and frank conversation -- or having to sit and suffer in silence.

Asking them, “Oh, can you go and grab your mobile phone? Yeah, and stick your fingerprint on there, and now that password was not recognized. You need to change it. And by the way, sorry, that password didn’t have quite as many exclamation marks as we need. And so, now if you’d like to turn on your webcam and log in using a photo, then we’ll let you in.”

You can’t do that. We have to be accessible exactly when people need us. And in that instant, we can be the difference between them having a completely honest, open, and frank conversation -- or having to sit and suffer in silence.

Gardner: Well, I don’t envy you your position, Tim. On one hand, you have sensitive healthcare and patient data that you need to protect. On the other hand, you need to make this a seamless and worthwhile experience.

How do you reach that balance? What have been some of the challenges that you’ve faced in trying to provide that proper balance?

Keep everyone secure by managing risk 

O’Neill: Everything is risk-based. We look at where you normally phone in from, or if you’re a first-time caller, or “Are you in a location that we trust?” “Are you in a number range that we trust?” Things like that. What’s the nature of the conversation you’re having with us?

There are a number of parameters. Not everything is a high-level risk if you are just phoning us, and you simply want to talk. If you don’t want to impart any special information or anything like that, then the risk is low. Everything is measured against risk, which is a mentality change in the organization.
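The parameters O’Neill describes -- trusted location, caller history, the nature of the conversation -- can be pictured as a simple scoring function. This is a minimal sketch only; the signal names, weights, and thresholds are illustrative assumptions, not Macmillan’s actual logic.

```python
# Hypothetical risk-scoring sketch: each signal nudges a score, and the
# score -- not any single rule -- decides how much friction the caller sees.
TRUSTED_LOCATIONS = {"UK"}  # assumption: a known-good region list

def risk_score(location: str, first_time_caller: bool,
               shares_sensitive_data: bool) -> int:
    score = 0
    if location not in TRUSTED_LOCATIONS:
        score += 2  # unfamiliar origin raises risk
    if first_time_caller:
        score += 1  # no interaction history to lean on
    if shares_sensitive_data:
        score += 3  # the conversation's nature dominates the decision
    return score

def required_checks(score: int) -> str:
    # Low-risk calls get no friction; only high-risk ones trigger validation.
    if score <= 1:
        return "none"
    if score <= 3:
        return "light verification"
    return "extra validation"
```

Under these assumed weights, a caller from a trusted location who just wants to talk scores zero and is never interrupted; extra validation only kicks in once the data risk of the conversation changes.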

And, you know, I’ve been in conversations where people say to me, “I don’t like that idea … I think somebody got it wrong” without quantifying the risk. That’s not good enough.

But if we understand exactly what the risks are, then we can understand what controls can mitigate those risks. We can choose the effective controls for mitigating the risks. And then we can take the actions and do the tasks to enable those controls.

For example, with multi-factor authentication (MFA), if your workforce is five people working from one office and you have no remote connections, that’s potentially the wrong security control. Your controls could be completely different. They will have the same effect, but they will have a more positive impact on the end-user experience. 

That’s the narrative change that you have to have. One of the most challenging things, when I first came into the organization, is when we were transforming IT systems. We were starting to understand how people wanted to interact with us digitally.

Historically, our interactions had been very much face-to-face, or through phone calls as well. And with COVID, obviously, all of a sudden, all of our interactions changed. So, it became, “How do we make it so that the legacy IT systems, users, and accounts can be migrated to new, safe methods without getting rid of the history of conversations they wanted to keep?” We didn’t want to lose the knowledge that we had and the relationships we had created with these individuals.

If you’re sending emails out to people saying, “Oh, we need you to change your log-on credentials because we’ve moved to this new IT system, et cetera, et cetera.” … If that person is sadly deceased -- we’re talking about cancer here -- then potentially sending something like that to their family is not great. So, there are lots of things to consider.

Gardner: It sounds like you’re approaching this from a fit-for-purpose security approach and then grading the risk and response accordingly. That sounds very good in theory, but I suspect it’s more complicated in practice and execution. So how, with a small security team such as yours, are you able to accommodate that level of granularity and response technically?

O’Neill: Everything starts complex. Every concept that you have starts off with a million boxes on the screen and loads of lines drawn everywhere. And actually, when you come down to it, it becomes a lot simpler.

When we get to the bottom level of this: What are the risks that we are trying to mitigate here? We are trying to mitigate the fundamental risk that an individual’s information may end up with the wrong person. That’s the most important risk that we’re trying to manage.

Start off complex, and then bring it all down to the simplest level, and focus on the one thing that actually matters, which is the risk.

And bear in mind that people will tell us about their cancer diagnosis before they’ve even spoken to their family, friends, … anyone. And they will phone us at the darkest moments and talk about suicidal thoughts. Those are conversations that you do not want anyone else to have visibility into.

When we get to such a stage that we are entering into something problematic on privacy or risk, at that point, we will do extra validations. Again, it’s all based around the particular risk. You have your conditional access element of risk, whereby you’re looking at where people are coming from. You’re looking at historical interactions from that location, and you’re extrapolating from that information so that a choice can be made automatically.

But then you’re also talking about training individuals: they don’t need to go through vetting questions at the start of a conversation, but once they get to a point where the nature of it changes, and the data risk of that conversation changes, at that point controls need to be applied.

Start off complex, and then bring it all down to the simplest level, and focus on the one thing that actually matters, which is the risk. 

Gardner: Well, at the same time as you’ve been embracing this model of risk-balancing, you’ve also faced a movement over the past several years to more cloud-ready, cloud-native environments. And that means that you can’t just rely on programmatic web application firewalls (WAFs) or creating a couple of filtering rules for your network.

So, how do we move securely toward such a cloud or mixed environment? How is that different from just building a security perimeter? Previously, you’ve mentioned to me a “security bubble.”

Remain flexible inside your security bubble

O’Neill: The new models are different in a number of ways. What’s historically happened with information security is somebody says, “I have this new system.” Then you ask, “What’s the system? What’s the risk? What are you doing with it? Where is the data going?”

And so, you designed the security around that system – but then you get a new system. Is that one okay? Well, then you design a new bit of security. You end up with a set of tools that you apply to each one. It’s slow, and it’s prone to failure because people design the system first and its uses change. It can also lock the organization in.

If we take an incredibly simple thing, which is the storage of data, an organization might say, “We’re an Amazon Web Services (AWS) cloud house.” That’s your house for now, but as we mature with these cloud strategies, people are going to start leveraging the economy of storage costs by moving their data dynamically to the less expensive storage locations. And when one cloud storage offering is cheaper than another, your data will fly across to it.
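The dynamic-placement idea reduces to a small decision: pick the cheapest storage tier that meets your constraints and move the data there. The provider names and per-GB prices below are made-up placeholders, not real quotes.

```python
# Illustrative only: choose the least expensive storage tier from a price map.
PRICES_PER_GB = {
    "aws_s3_standard": 0.023,   # hypothetical prices, not real quotes
    "aws_s3_glacier": 0.004,
    "other_cloud_cool": 0.010,
}

def cheapest_tier(prices: dict[str, float]) -> str:
    # Return the key with the lowest price; in practice this decision would
    # also weigh egress fees, latency, and data-residency constraints.
    return min(prices, key=prices.get)
```

The security implication O’Neill draws is that once placement is automated like this, controls have to follow the data rather than be bolted to any one provider.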

We can’t work in the old way anymore within cyber security and information security. What we have to do is create this security bubble that we’ve been talking about. It allows the organization the flexibility to change the security strategy.

For example, every year or two, we suddenly go, “There’s a new threat. Here it comes.” Yet every threat works in fundamentally the same way: You have to get in, you have to get the rights to see what you’re doing, and you have to be able to move around. If you break it down to those basics, that’s what everything in security needs to do, really.

If we can start to move to this bubble, to say, “We know what our data is, we know who our users are, and we know who they’re going to interact with.” Then we can allow people and organizations the flexibility to do what they want and only block the high-risk events within that.

 If your data leaves the bubble, and it’s just, “Hey, do you want a cup of tea?” kind of communication, obviously you’re not going to worry about that. If it’s something that contains risky data, then we’ll worry about that. We’ll block that.
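That boundary check can be sketched as a tiny egress filter: harmless chatter leaves the bubble freely, and only messages matching risky-data patterns are blocked. The patterns here are stand-in assumptions; a real deployment would use a proper data-classification engine, not a two-item regex list.

```python
import re

# Assumed example patterns for "risky data" -- purely illustrative.
RISKY_PATTERNS = [
    re.compile(r"\bdiagnosis\b", re.IGNORECASE),
    re.compile(r"\b\d{10}\b"),  # e.g. an NHS-number-like digit run
]

def allow_egress(message: str) -> bool:
    """Return True if the message may leave the security bubble."""
    return not any(p.search(message) for p in RISKY_PATTERNS)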

But we have to stop thinking about application-level security and start thinking a lot more strategically about security. We may have to stop and ask the business, “Where are you going? What are you doing?” But they don’t know yet. And also, as COVID has shown us, sometimes nobody knows where we’re all going.

Gardner: Right. We need to be prepared for just about anything and also be able to react quickly, so you can’t be knee-jerk and react to every app or system differently. As you point out, you need to be strategic.

And so, part of being strategic, for an organization such as yours, because you’re supported by donations; you’re a non-profit -- you need to be cost-efficient as well. So again, it’s a balancing act between cost efficiency and being strategic about security. How is that something you’ve been able to manage?

A wise spend supports smart security

O’Neill: Well, I don’t believe they’re in conflict. If we look at organizations -- I won’t name them -- that are huge and have very big budgets, who spend tens of millions on their cyber security, they have huge teams, and they still get breached. The amount that you spend doesn’t necessarily correlate with greater security.

Spending intelligently does, and it all comes from focusing on risks. If you sit there and you say, “You know what we have to do, we have to go through the top 20 NIST or CIS methods or recommendations,” or whatever, “and we’re going to supply the best product on the market for each of those, and check the box.”

Firstly, you potentially throw a load of money away, because in the end you don’t actually need it all. The spec says, “Oh, you need MFA and a WAF.” Well, actually, it’s not MFA that you need, and it’s not a WAF that you need.

What are the risks that those products are mitigating? And then, what is the best way to mitigate those risks? It all comes down to that when you sit back and look at what we do for a living in information security.

We talk a lot about burnout in information security and wellness. It’s because people keep chasing their tails. Every day, there’s a new headline about a breach or a new zero day or a new technique -- or whatever it may be -- and everyone starts worrying about it. What do we do to protect against this?

But it’s about assessing the risk. And from a risk perspective, all the rest of it stays the same to a certain degree. It’s very rare that a new zero day fundamentally changes your risk.

Gardner: You bring up an interesting point. Not only are you concerned about the comfort and sense of security for your end users, but you also need to be thinking about your staff. The people that you just mentioned who are increasingly facing burnout.

Throwing another tool at them every three months, or asking them to check off 16 more boxes every time a new system comes online, is going to be adverse to your overall security posture. Is there something you look for in how you tackle this that also accommodates the needs of your security staff?

Monitor what matters

O’Neill: You’ll have to ask them -- but they all still have their hair. Yeah, organizations often talk about insider threats. I think it’s a terrible thing to be talking about because it’s such a small percentage. A lot of organizations treat their employees as part of the problem, or almost an enemy that needs to be monitored constantly. I don’t care if you’re on Facebook at all.

I care if you’re trying to download something malicious from Facebook or upload something like that to Facebook. But the fact that you’re on Facebook is a management issue, not a cybersecurity issue. We do not monitor things that we do not need to monitor.

For example, we were getting a weekly report from one of our security products. It was typically a 14-page report that basically patted itself on the back by saying how great it had been. “This is everything I’ve blocked,” it said. And a member of my team was spending pretty much a day going through that report. Why? What possible gain came from looking at that report?

I care if you're trying to download something malicious from Facebook. But the fact that you're on Facebook is a management issue, not a cybersecurity issue. We do not monitor things that we do not need to monitor. 

The real question is … Once you read the report, what did you do with the information? “Nothing, it was interesting.” “But what did you do with the interesting part?” “Well, nothing.” Then don’t do it. Everything has to have a purpose, even to the smallest degree. I had a meeting this morning about policies. Our acceptable use policy document is, I think, 16 pages long.

Come on. It doesn’t need to be 16 pages long. I want two pages, tops. “Do this, don’t do that, or absolutely don’t do this.”

We have a mobile device policy that everyone has to sign up to. … We have a mobile device manager. You can’t connect to systems unless your operating system is up to date, all of this sort of stuff. So why have we got a policy that is seven pages long?

Say what you can and can’t do on mobile devices. Then all we need to say is, “You’ll have to adhere to the policies.” All of a sudden, we’re making everyone’s life easier. Not just the information security teams, but the normal end users as well.

It is all about working out what’s actually valid. We’re very good in information security at doing things because that’s what we’ve always done, instead of thinking.

Gardner: I’m hearing some basic common threads throughout our discussion. One is a fit-for-purpose, risk-arbitrage approach; another is simplicity whenever possible; and a third is increasingly having the knobs to dial things up and down and find the proper balance.

To me, those increasingly require a high level of analysis and data, and a certain maturity in the way that your platforms and environment can react and provide you what you need.

Tell me a little bit about that now that we’ve understood your challenges. How did you go about a journey to finding the right solutions that can accommodate those security analysis and strategy requirements of granularity, fit-for-purpose, and automation?

Streamline your team’s efforts

O’Neill: When we go to market for a security product, usually we’re looking at a specific issue that we’re trying to fix and control. A lot of the products will do the job that you want them to do.

But there are a few other things we look for. Can my team log into it and very quickly see what is important? Can we go from seeing that to the action that needs to be taken? How quick is that journey?

When somebody is demonstrating the platform, for me, my question is always, “How do I get from seeing it to knowing that it’s actually something I need to do, to then being able to do something about it?” That journey is important. Loads of products are brilliant, and they have a pretty interface, but then they fall apart underneath that.

And, the other thing is, a lot of these platforms produce so much information, but they don’t give it to you. They focus on just one element. What value-add can I get that the product might not deliver as a core element, but that actually enables me to easily tick off my other boxes as well?

Gardner: Can you describe what you get when you do this right? When you find the right provider who’s giving you the information that you need in the manner you need it? Are there some metrics of success that you look for or some key performance indicators (KPIs) that show you’re on the right track?

O’Neill: It’s always a bit difficult to quantify. Somebody asked me recently how I knew that the product we were using was a good one. And I said, “Well, we haven’t been breached since using it.” That’s a pretty good metric to me, I think, but it’s also about my team. How much time do they have to spend on this solution? How long did it take to get what you needed?

We have an assumed-breach mentality, so I expect the first job of the day is to prove to me that we have not been breached. That’s job one. Next, how quickly can you tell me that from the time you turn your computer on? How much of the time do you end up looking at false positives? What can the product do every day that helps us get a bit better? How does that tool help us to know what to do?

Gardner: We began our discussion today by focusing on the end user being in a very difficult situation in life. Can we look to them, too, as a way of determining the metrics of success? Have you had any results from the user-experience perspective that validate your security philosophy and strategy?

Inspect end-user behavior, feedback

O’Neill: Yes. Obviously, we interact constantly with the people that we support and look after. It is the only reason we exist. If I do anything that is detrimental to their experience, then I’m not doing my job properly.

We go back and we do ask them. I personally have spent time on phone lines as well. I don’t sit within my little security bubble. I work across the organization. I’ve been on the streets with the bucket collecting donations.

We have very good relationships with people that we have supported and continue to support. We know because we ask them how it felt for them. What works for them, what doesn’t work for them? We are continually trying to improve our methods of interaction and how we do on that. And I’m constantly trying to see what we can do that makes that journey even easier.

We also look at user behavior analytics and the attack behavior analytics on our websites. How can we make the experience of the website even smoother by saying, “We’re pretty sure you are who you say you are.” Are they going to the same places? Are you changing your behavior?

And I can understand the behaviors and even how people type. People use their keyboards differently. Well, let’s look at that. What else can we do to make it so that we are sure we are interacting with you without you having to jump through a million hoops to make sure that that’s not the case?
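Keystroke dynamics of the sort O’Neill alludes to can be sketched by comparing a session’s inter-key timings against a stored baseline. The single mean-interval feature and the fixed threshold below are deliberate simplifications of what real behavioral-biometrics systems use.

```python
from statistics import mean

def timing_distance(baseline_ms: list[float], sample_ms: list[float]) -> float:
    # Compare mean inter-keystroke intervals; production systems use far
    # richer features (digraph latencies, hold times, and so on).
    return abs(mean(baseline_ms) - mean(sample_ms))

def likely_same_user(baseline_ms: list[float], sample_ms: list[float],
                     threshold_ms: float = 25.0) -> bool:
    # Hypothetical threshold: small drift reads as the same person; large
    # drift quietly raises the risk score rather than locking anyone out.
    return timing_distance(baseline_ms, sample_ms) <= threshold_ms
```

The design point matches the interview: a mismatch here need not block the user, only feed the risk score that decides whether extra validation is worth the friction.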

Gardner: You mentioned behavior and analytics. How are you positioning yourself to better exploit analytics? What are some of your future goals? What are the new set of KPIs a few years from now that will make you even more strategic in your security posture?

Analytics to lessen user interruptions

O’Neill: That’s a really good question. The analysis of user behavior linked to attack behavior -- that, and the analysis of many other elements, is going to become increasingly important for smoothing this out. We can’t keep using CAPTCHA, for example. We can’t keep asking people to identify fire hydrants that are within 30 centimeters of a dog’s leg. It’s absurd.

We have to find better ways of doing this to determine the true risk. Does it matter if you’re not who you say you are until we get to the point that it does? Because, actually, maybe you don’t want to be who you are for a period of a conversation. Maybe you actually want to be someone else, so you’re disassociating yourself from the reality of the situation. Maybe you don’t want to be identified. Do we have to validate all of the time?

I think these are questions we need to be asking. I think the KPIs are becoming a lot more difficult. You have to base them around, “Did we have any breaches?” And with breaches, we separate our information governance from our information security, but they’re brothers of one another, aren’t they?

We have to find better ways to determine the true risk. Does it matter if you're not who you say you are until we get to the point that it does? Do we have to validate all of the time? These are questions we need to be asking.

An information governance leak shouldn’t happen with good cyber security, so we should expect to see a lot fewer incidents and no near misses. With the best interaction KPIs, we should see people get in touch with us a lot quicker, and people should be able to talk to the right people for the right reason a lot quicker.

Our third-party interaction is very important. As I said, we don’t offer any medical services ourselves, but we will pay for and put you in touch with organizations that do. We have strategic partnerships. To make that all as smooth as possible means you don’t need to worry who you’re talking to. Everything is assured and the flow is invisible. That kind of experience -- and the KPIs that matter the most for delivering that experience – provides well for the person who needs us.

Gardner: Any closing advice for those who are moving from a security perimeter perspective toward more of a security bubble concept? And by doing so, it enables them to have a better experience for their users, employees, and across their entire communities?

Dial down the panic for security success

O’Neill: Yes. This is going to sound a bit odd, but one of the most important things is to conceptualize, and to take the time to challenge my team. What is the gold standard? What is the absolute? If we had all the money in the world and everything worked, what is the perfect journey? Start from there, and then bring it down to what’s achievable, or what elements of it are achievable.

I know this sounds odd but stop panicking so much. None of us think well when we’re panicked. None of us think well when we’re stressed. Take the time for yourself. Allow your team to take the time for themselves. Allow their brains the freedom to flow and to think.

And we’ve got to do what we do better. And that means we have to do it differently. So, ask questions. Ask, why do we have endpoint protection? I’ve got this, I’ve got that, I’ve got all these other things. Why have we got something on every endpoint, for example? Ask that question.

Because at least then you have validated what it is truly for, and you better know how much value it has -- and therefore the proper amount of effort it needs. Stop doing things just by ticking off boxes. Because as an ex-hacker, let’s call it, I know the boxes that you tick. You tick all those boxes; I know how to bypass those boxes. So, yeah, just take time, think, conceptualize, and then move down to reality. Maybe.

Gardner: Be more realistic about the situation on the ground, rather than just doing things because that’s the way they’ve always been done?

O’Neill: Yes, absolutely. Understand your risk. Understand what you are actually having to support. The fortress approach doesn’t work anymore. The proliferation of software as a service (SaaS) applications, and the desire to allow everyone to perform at their best within and outside of an organization, mean allowing people the flexibility to work in a way that best suits them. And you cannot do that with a fortress.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.
