[00:00:00] Chris Sienko: Every week on Cyber Work, listeners ask us the same question: what cybersecurity skills should I learn? Well, try this: go to infosecinstitute.com/free to get your free cybersecurity talent development eBook. It’s got in-depth training plans for the 12 most common roles, including SOC analyst, penetration tester, cloud security engineer, information risk analyst, privacy manager, secure coder and more. We took notes from employers and our team of subject matter experts to build training plans that align with the most in-demand skills. You can use the plans as is or customize them to create a unique training plan that aligns with your own unique career goals. One more time, just go to infosecinstitute.com/free or click the link in the description to get your free training plans plus many more free resources for Cyber Work listeners. Do it. infosecinstitute.com/free. Now, on with the show.
Today on Cyber Work, Noriswadi Ismail of Breakwater Solutions and the Humanising 2030 campaign joins us to talk about privacy as it pertains to both international business and cybersecurity, and why it’s important to learn not just the certification variants from continent to continent, but also the cultural variants that shaped them. Via the Humanising 2030 campaign, Noris and colleagues hope to bring a more ethical and diverse approach to programming and guiding AI in the decade to come. This is a heavy conversation, folks, so get ready to dig deep into the critical issues of our time, today on Cyber Work.
[00:01:38] CS: Welcome to this week’s episode of the Cyber Work with InfoSec podcast. Each week, we talk with a different industry thought leader about cybersecurity trends, the way those trends affect the work of InfoSec professionals while offering tips for breaking in or moving up the ladder in the cybersecurity industry.
Noriswadi Ismail leads Breakwater’s global data privacy practice. He is an expert on how data privacy and cybersecurity interact, and a subject matter expert on global data privacy laws and regulations, data privacy and security strategies, and data privacy technology. Noris is a frequent author and guest speaker on global data privacy and an active member of the International Association of Privacy Professionals, IAPP for short. Prior to Breakwater, he was a Managing Director at Ankura and led GDPR consulting for EY UK. Noris has degrees and certificates from multiple universities, including doctoral research at Brunel Law School. He is a Certified Information Privacy Professional, Asia (CIPP/A), and has a practitioner certificate in data protection (PC.dp). Noris, thank you for joining me today. Welcome to Cyber Work.
[00:02:44] Noriswadi Ismail: Thank you so much, Chris, for having me here. Hello from London.
[00:02:48] CS: Very good to have you. Very good to have you from another continent here. To start with, I’d like to get to know our guest a little bit by tracing their interests. Where did you first get interested in computers and tech, and what specifically drew you to privacy as a career interest?
[00:03:07] NI: I think it’s a very fundamental, memorable question, Chris, because it takes me back about 25 years, to when I studied law. Even prior to law school, I was exposed to quite a number of technology subjects, starting with patent, because when you look at a patent, it is about technology, about invention, about novelty. So one of the things that really, really intrigued me at the time was software patents, and that actually led to further inquisitiveness. I reached out to my professor and said, “What do you think about the future of IT law and data law?” She said, “This is something that you really want to explore, and you have to start now.” So I think, since then, I never turned back, there’s no U-turn, and I developed a passion for IT law and telecommunications law during law school.
When I entered the [inaudible 00:04:04] world, being a lawyer, I was involved with lots of IT technology matters, technology contracts, data center contracts. At the time, it was mainframe. That was before –
[00:04:15] CS: Of course, yeah.
[00:04:18] NI: There are lots of very sophisticated network security methods, which involve a lot of, I would say, a practical kind of approach. Every day at that time was a learning day. It’s the same until today, Chris, because technology evolves, technology changes, and it’s very, very interesting to see how exactly it evolves. I never turned back since then.
[00:04:39] CS: Yeah. I imagine that there was a lot more of an emphasis on the physical aspects of data security with, like you said, the in-house mainframes and things like that.
[00:04:48] NI: Yeah, exactly. Because on the one hand, there’s the physical security aspect. On the other hand, there’s also virtual security. There’s also the in-between, where most practitioners globally, irrespective of region, really need to understand the local nuance, the regional nuance, and the cultural nuance as well. That’s actually where the exposure comes into the picture, whenever I’m involved with a number of conversations with regulators, with multinational organizations, even with consumers, even with my peers. There are quite a number of sophisticated areas that the law couldn’t address. It’s not all black and white, whether it’s a good law or a bad law. Again, it depends on the risk. How exactly the risk lens is, how exactly the risk posture is, and how exactly the risk driver is, yeah.
[00:05:41] CS: Yeah. I want to just pause to address our listeners and break the fourth wall a little bit. Another one of our UK podcast friends, the author Susan Morrow, had a similar story. She got into cybersecurity because she was originally in chemistry. She found that she liked learning about securing her findings on the website more than she liked actually producing the findings. Here you’re saying you started out in patent law, and then you found that you liked the privacy intersections of that. So I think, just as a word of wisdom to our listeners: if you’re doing something else completely, and something just catches your interest out of the corner of your eye, don’t be afraid to pounce on it, and you might well become the master of a domain that hasn’t even been claimed yet.
[00:06:35] NI: Yeah.
[00:06:37] CS: To move on a little bit, could you talk about some of the formative projects, positions, experiences, and lessons that put you on the path from your early days in patent law and studying telecommunications toward the shift to privacy? You mentioned, obviously, that you saw this opportunity here, but what were some of the big projects that turned you into a subject matter expert in this field?
[00:07:03] NI: I think that’s a very, very good question. My foundation when it comes to technology contracts and technology negotiation involved the big boys of the time, like Google, Microsoft, IBM, and even Cisco. Because whenever I reviewed the contracts, I was like, “Wow! There are a lot of things that I need to learn being a lawyer,” an in-house lawyer at the time. So it was a huge learning curve for the first five to 10 years of my career. At the time, when I was in Asia, the data protection and privacy law was not there yet, and we had to refer to the EU Directive, the EU-based kind of directive, which is now the [inaudible 00:07:44] of the GDPR. We dealt a lot with how exactly you need to deal with data, not only from the physical perspective, but also the virtual perspective. So there’s one aspect of it in terms of the contractual negotiations, as well as dealing with the big players and multinationals.
Then, on the other hand, throughout my career, during the first 10 years of being an in-house lawyer and general counsel for a system integrator company, we also dealt with lots of public sector contracts, where most of the public sector contracts wanted to ensure that vendors or providers could really comply with whatever strict requirements there were, like a box-ticking exercise. But in reality, I asked the exact question to the project managers and project directors: don’t you believe that this is quite overambitious? And of course, because of the relationship, okay, the answer is yes. But in reality, when you deliver the project, there are, of course, many challenges, because it’s quite new and both sides are really learning throughout the process. That’s actually the beauty when it comes to a technology project or technology implementation, Chris. Because on the one hand, you can see how exactly the law intersects with risk, you can see how it intersects with technology, and you can see how the law also intersects with other sophisticated aspects of law, like taxation, like HR, or even corporate, commercial, mergers and acquisitions, or other areas of law that really require clarity.
I would say that the first five to 10 years of my career really shaped the foundation. There are three things that I learned throughout those years, Chris. The first is mastering the principles. Even if you’re not a lawyer, if, let’s say, you’re a computer science student, or from an IT background, or even engineering, or social science, or history, or geography, it doesn’t matter what your background is. Mastering the principles is important, and mastering the data taxonomy is important. So that’s the first. When you understand the principles and the foundation, the definitions, it will remain forever, no matter what the law is, right? Whether it’s GDPR, or the CCPA in California, or even elsewhere around the world.
Second is how exactly these principles can be tweaked and contextualized within a sector environment, because there are many sectors: data-driven sectors and non-data-driven sectors. When you deal with regulated sectors like financial services, or healthcare, or telecommunications, media, technology, of course, these are the bulk of where the data is being processed. This is where you have to go back to basics by asking the one-H, four-W kind of approach: the how, what, why, when and who. Go back to basics. From there, you’re able to articulate the third point, which is a key takeaway that I learned throughout the first 10 years of my career: what are the challenges? What are the pain points, and how exactly can I help to ensure that you minimize the risk, or be part of that risk minimization conversation? Whether it’s a data risk issue, whether it’s a technology implementation issue, or whether it is something which intersects between data privacy, cybersecurity, and data governance.
[00:11:08] CS: Yeah. Boy, that’s a whole lot right there. I mean, we could probably break off on any of those three paths and do a whole episode about that. We might have to have you as a return guest. But yeah, I’m especially interested in, as you said, the intersection between the black and white of the law and the risk. I think that’s really fascinating. But for now, we want to talk to you a little bit about your current career. Can you talk about your work as Managing Director at Breakwater Solutions? What are some of the common tasks and day-to-day responsibilities, and what does an average day look like in this role?
[00:11:45] NI: Well, I kick off my day at 5:30 in the morning, and of course, it’s not just about work. I start with meditation, and with prayers, and with that kind of breathing, inhale-and-exhale technique. I think it’s important. Breathing is important at the start of the day, and it doesn’t matter if you’re in London, or New York, or elsewhere. Typically, my day-to-day activities involve five matters. The first is helping the organization and clients to really improve their current pain points when it comes to data protection and privacy, their current state of maturity implementation, and their privacy technology improvement program. Then second, listening. As a practitioner, I have to listen. One thing I learned a lot at [inaudible 00:12:37] during those years is that we like to talk, but we actually have to pause and listen. So that listening exercise is important; it’s part of my daily role. I would say I allocate between two to three hours to listen to my peers, the market, regulators, and even practitioners that know the market, and clients, on what they would like to update in terms of what their priorities are.
The third is being a thought leader, helping organizations to really move to the next level in terms of what they want to aspire to and achieve in their current environment and program. This is important, Chris, because globally, whether you’re in the US, or in London, or in Europe, or in Asia, or the Middle East, that kind of integration, as well as connecting the dots, is important. My role is actually to ensure that everyone is aligned in terms of managing expectations, not only from the technology and delivery perspective, not only from the thought leadership perspective, but also simplifying the narratives, to ensure that everyone is on the same page in terms of what we need to achieve.
The fourth part of my day-to-day routine is looking into people matters, people and culture matters. Because for me, diversity, equity, and inclusion is at the top of the agenda as part of the delivery team and as part of the top leaders, together with our colleagues in Breakwater and practitioners, or even with our stakeholders globally, regulators, clients. DEI is actually an important element, and essentially a part of the conversation that should not be overlooked whenever you discuss, or whenever you argue, whenever you debate, or whenever you dissent on certain areas, or even when you put forward a suggestion, so that there’s no unconscious, or even conscious, bias from a certain perspective.
The last one on the list, Chris, is more about prioritizing and reprioritizing the proposals, as well as other kinds of engagement, so that there is focus time to be met and achieved. Within my five days, I actually set aside Wednesdays and Fridays for my focus time, because it’s about getting things done, and ensuring that it’s productive based on what exactly you want to achieve, or aspire to achieve, in a particular day or week, yeah.
[00:15:09] CS: Yeah. We’ve implemented focus time in our office as well, and it’s stunningly helpful in terms of having an unbroken amount of time to think the big thoughts and so forth. What I like about that too is, I ask that question to just about everybody: what’s your day-to-day routine like, or what are your common tasks? The two answers we usually get are either, “Well, it’s different from day to day,” or they just go into, “Well, first, I check my email and then I do this.” But I like that your work week is sorted by five principles. You’re not thinking in terms of, “I got to get this done. I got to get this done.” You’re thinking in the larger sense. I mean, I’ve never heard someone more accurately say, “I steer the ship,” than what you just did there. Very, very impressive. Thank you.
Some of our regular listeners will likely remember the episode of Cyber Work Live we hosted earlier this year, a couple of months ago, called A Public Discussion About Privacy Careers. We have had several episodes from one of InfoSec’s instructors, Chris Stephens, about privacy and its various certifications around the world. InfoSec tends to connect privacy and cybersecurity quite closely as career tracks and work objectives, and we do our best to prepare new professionals for roles in either or both areas of expertise. What do you think the state of international privacy is like right now, Noris, as it pertains to how cybersecurity and risk plans are being formulated, especially within multinational organizations?
[00:16:40] NI: That is a very, very simple question, but hard to answer, if I should rephrase it that way. If you look 20 years back, Chris, and listeners and audience, data privacy law was very much a patchwork. The US had no central law, prior to the current evolution across the 50 states, even at a federal level. At the time, the EU had its own directive, the 95/46/EC directive. There were quite a number of sector-specific laws outside the EU, or outside the US, which actually governed and regulated data processing activities, mostly in the financial services sector. Since then, we’ve seen international data privacy frameworks and regulation evolve. In a way, it has improved by leaps and bounds, I think, partly because of how the GDPR has influenced the non-EU landscape and the decision to review current activities. That’s one lens.
Another lens, Chris, is that, having observed, and having engaged, and having listened, and also having understood what exactly the stakeholders’ concerns are from the eastern side, the western side, and the Middle Eastern side, there are also certain cultural as well as regional nuances and differences. I think one thing that some of the principals really need to think about is looking at this more from the cultural perspective as well. I think there’s still a lot of work to be done, and I will say that, because of the GDPR regulatory framework, there has actually been a much more gradual shift for those outside the EU to really improve their current frameworks. Even now, the GDPR is undergoing what they call GDPR 2.0, a review to look into what they have done right, what they have done a little bit wrong, and what they could have done better.
[00:18:42] CS: Yeah, and a lot of things that they didn’t really know were going to happen at the time. I’m sorry. Go ahead.
[00:18:47] NI: Yeah.
[00:18:48] CS: I think it’s worth noting too, because obviously, we’re very certification-based in terms of being a training academy and so forth. When we talk about the different IAPP certifications for different parts of the world, I want to make sure that people are not thinking just in terms of, “Okay, I just need to know which of the checkboxes applies to Asia, which ones apply to Canada, which ones apply to Europe; I just have to keep them separate in my head,” but are also thinking through the framework of the cultural differences that inform those privacy variants. Now, do you feel like studying for the IAPP certifications accurately prepares people for those cultural variants, or is that something you had to learn on your own?
[00:19:38] NI: I really like that question, Chris. I’m going to be very biased, because I normally sit as one of the IAPP Asia advisory board members, and I’m currently also an active member of the IAPP European advisory board for 2022 and 2023. We understood very well what exactly the market demand is, and the market feedback as well. It was very challenging during those early years, as part of the certification planning, to decide: should we cover some of the jurisdictions that have the laws? Or should we cover the jurisdictions that have a certain maturity in terms of enforcement, depending on how the regulators actually enforce the law? As far as the certifications introduced by the IAPP are concerned, I would say they are the model certifications for any student or professional who wants to upskill. It doesn’t matter whether you have a cybersecurity background or a data governance background; it depends on the level of upskilling.
I will say there are three certifications that you can really look into. Let’s say you don’t have a legal background: start with the CIPM, which is more from the management side of things. It’s more about how to do and how to operationalize a privacy program. If, let’s say, you have a CIPD, or you have a very, very strong cybersecurity background, but you want to specialize much more toward a regulatory-focused certification, you might want to explore the CIPP/E for Europe, or A for Asia, or C for Canada, as well as other jurisdictions that the IAPP has outlined in its certification program. If, let’s say, you want something much more in between, like a privacy engineering kind of lens, to upskill yourself, you might want to consider the CIPT, which is more for the technology side of things. It’s actually beautiful, because you have the options to choose. But importantly, consult those who have actually undergone the certification; most of them will be able to give you the relevant feedback.
But importantly, I always tell aspiring or future privacy leaders that certification is not a license for you to be a specialist. It’s actually the experience. So, yes, certification is good. But once you’re involved with a project, once you’re involved with the actual implementation, not only from the privacy engineering perspective, not only from the regulatory perspective, like the GDPR, or even Asia, or Canada, or America, but with a very specific, sophisticated project, then your upskilling on top of the certification is much more accelerated. Again, it depends on the exposure, it depends on the mentoring, and also on your ambition to upskill yourself through the IAPP certifications. But in short, to answer your question, it’s a very good certification, and I highly encourage anyone who wants to upskill to sit for the exam.
[00:22:40] CS: Speaking to what you said there, that it doesn’t necessarily confer upon you instant expertise, but it’s a good upskilling method. If you’re just getting started in the industry or want to get your foot in the door, do you recommend starting with the certification and using that knowledge as a baseline to get your first experience? Or does it make more sense to get your first experience and then upskill when you feel your natural abilities start hitting the wall?
[00:23:13] NI: I like that question and reflection, Chris. I think there are two strategies for candidates, for talents, to do this. If I were a candidate who doesn’t have a legal background, assuming instead a cybersecurity background, or even HR, or marketing, but I would like to get involved with my first data privacy project, right? I would invest my time in pro bono work, offering myself for a small project for a charity: a charity which has an app, and they process personal data. You start with a small project and, at the same time, in parallel, seek the certification. You can see the relevance of some of the principles to the project that you’re involved in, assuming that you get the opportunity with a particular charity or by way of pro bono. That’s one scenario that candidates might want to explore.
Another scenario is assuming that you are already in the compliance department, or the cybersecurity department, or HR, or marketing. Again, it depends on the governance structure of your organization, whether it’s centralized, decentralized, or hybrid. It’s good to actually think about data on a day-to-day basis when you perform your function; even though it’s monotonous, there’s always data risk there. Then, get involved with certain data privacy activities, being a privacy champion, being a security champion, and then start to link which certification might be relevant for you as part of your career progression. I would say those two lenses are, I would say, the technical and/or strategic approaches. Importantly, get that buy-in from your line manager. Get that buy-in from the wider stakeholders to ensure that there is support for your current growth and aspiration.
But assuming that you are not attached to an organization, but you are an entrepreneur, or you’re a developer, or a coder, there’s no harm in starting to consider this as well. Because it’s important: every single second we deal with data, and there’s always risk there.
[00:25:21] CS: Yeah, that’s great. Just to put a button on that: don’t feel like you have to make this decision on your own. Like you say, talk to your supervisors, and talk to your team, and see what they need from you. Your path might make more sense than you realize. I was originally planning on talking about privacy and security for this whole episode, and clearly, we would have enough material to do two, or three, or five, or 10 episodes. But I went over to your website, noriswadiismail.com, and I noticed a section for something called Humanising 2030, and it really intrigued me a lot. I hope you don’t mind talking more about it with me. To get started, what are the objectives of Humanising 2030? It’s very exciting, but I have to say, a very ambitious project, considering how close we are to that year right now, about eight years off. Seven and a half years.
[00:26:14] NI: Yeah, Chris. I’m very impressed that you highlighted Humanising 2030. The motivation, or perhaps inspiration, for why I kicked off the Humanising 2030 initiative came partly from when I was at Oxford, at Saïd Business School. At the time, it was during the pandemic, and I attended the Oxford Scenarios Programme at Saïd Business School, University of Oxford. One of the modules is about future prediction. At the time, I raised the question: all right, are we going to be more human when robots take over our roles, or our day-to-day activities, so to speak? Will the robots take over 100% of the tasks, or will they automate everything we do, so that there will be much less human interaction? Then the question is, where exactly does that kind of conscious and unconscious bias sit? And to what extent do we need to deal with that kind of psychological element whenever there’s human involvement and intervention throughout the workflow or the process? It doesn’t matter whether it’s in an app environment, a system environment, or a large-scale automated environment; again, it depends on what exactly the sector of the business is in the future.
One of the inspirations is actually Professor Richard Susskind, a great inspirational leader, professor, and consultant who has advocated about AI. One of the hypotheses he highlighted in his book, which is happening today, is that technology, the robot, will take over human activities. Humanising 2030 is about addressing some of these gaps, and understanding why these gaps are not actually being addressed leading up to 2030, and the extent to which different geographies and different stakeholders really need to understand that there’s a cultural nuance, there’s a regional nuance, and there are also the coders. Because who actually designs the algorithm is a human being; it is predetermined. The extent to which the code is designed is based on certain instructions, a certain workflow. The brainchild behind it is a human being. We, human beings.
That kind of diversity, equity, and inclusion from the bias lens perspective, in my view, needs to be really discussed and unveiled, and it’s still a work in progress. Humanising 2030 is not just about addressing this, Chris; it’s also looking into certain multidisciplinary areas where I think that we human beings will definitely still be relevant and prevail. It doesn’t matter whether the robot takes over 60%, or 30%, or even 70% of the work; there should be an element of human involvement toward the improvement of that particular AI environment. It’s still a work in progress in other jurisdictions around the world as well. Yeah.
[00:29:24] CS: Yeah. It’s super exciting, and I can’t wait to hear more about it. I want to start by talking about AI bias and geopolitical impact. I’ve talked to past guests about several examples of AI bias, whether it’s face recognition technology that recognizes white skin but was improperly built to handle non-white skin tones, or text tools that don’t deal with non-standardized language constructions, et cetera. Can you give me some more examples of current AI bias and the ways that this project is working to untie these knots and make AI-based projects more ethical?
[00:29:59] NI: Sure. Coming from a [inaudible 00:30:01] background, I perceive that there’s also a certain bias in terms of pronunciation. There’s also bias in terms of expression. There’s also bias in terms of articulation. There’s also bias even in how observations are made and how you construct sentences, assuming that you’re not a native Caucasian. This is actually what we are seeing and observing today as a [inaudible 00:30:29]. I think there’s still a lot of work to be done in this space. Governments around the world, especially Western governments, think that, all right, this is just a nice-to-have, not a must-have. The answer is no, because if you want to be very inclusive, if you want to have a very diverse kind of culture, you have to look beyond the skin. You have to look into the neuroscience of diversity, rather than a particular aspect or label within a particular DEI lens. I think there are quite a number of efforts from the UK Government on this matter, as well as in certain parts of the US and certain parts of Europe, like the Netherlands, and also in France and Germany. It’s still a very delicate debate, Chris, because there’s all this kind of conscious and unconscious bias whenever we actually want to design something.
Then, going back to your question, in short, I would say there are two things that stakeholders really need to rethink and re-strategize. The first is, never think that that kind of bias should be the default, that everyone should follow it. Second, there are lots of datasets that can support this, because the more data, or the more diverse the dataset, the better for the particular organization or the particular mission that you want to achieve using AI. And of course, when we talk about AI, it’s not just about trying to humanize AI, or trying to make it more diverse from the DEI lens, but also understanding the risks, the data behind it. Because there is also sensitive personal data that might be triggered or processed, assuming that you are processing sensitive personal data beyond white skin, as you mentioned a while ago. It’s still a work in progress, Chris.
[00:32:32] CS: Right. Or the way people are using technology, if there are physical disabilities or neurodiversity involved. Not everyone is using the same hotkeys, or taking things the long way around, or whatever. There are so many different points of interrogation that I think are really well served by this initiative. One of the objectives of Humanising 2030 is understanding when diversity and inclusion are being overlooked and compromised. You’re talking about that already, and that’s a topic we like to talk about a lot here at Cyber Work. It’s not just a nice idea, but an absolute necessity, as future security issues will become more complex and need a wider variety of skills, experiences, backgrounds, and life histories.
We talked about the importance of this already. Can you talk about some of the specific strategies that you have in mind to make diversity and inclusion less of a thing that you listen to on a yearly half-hour video, get your certificate for, and forget about, and more of an organizing principle for your company?
[00:33:35] NI: I like that question, Chris. In my view, and I will be very honest about this topic, Chris, and with the listeners: it starts from home; it starts from the family. From there, you can gradually build DEI into the work environment, and then expand it into the market with the peers, stakeholders, and partners you're dealing with. That's where it needs to start, or be kicked off. But in reality, what we're seeing in the market, whether in Europe, the US, the UK, or even in Asia, is more of a "we need to do this" box-ticking exercise, rather than a very strong tone from the leadership that is cascaded to the wider employees and members of staff, and even to vendors, when it comes to the DEI conversation or the DEI agenda. That's one angle where we need to think about doing better.
Then the second is how DEI should be balanced with the mission, vision, and growth of the company. At the same time, from a data risk perspective, it should not be biased, especially the datasets, and it should be quite inclusive, in the sense of asking: have we taken into consideration datasets outside the US, or outside Europe, or datasets in Asia, or even in the Middle East? Assuming you're dealing with global stakeholders and have global employees, those themes really need to be established as fundamentals. Once you've established that, it's important to dissect and cherry-pick: which one do we need to focus on, which one do we need to prioritize, and which dataset do we need to really observe, understand, learn from, and even listen to, asking, "What is all this?" Then that data becomes a workable plan that really resonates with the strategy.
[00:35:43] CS: Okay. To take that to the organizational level, Noris: for organizations that want to create a more diverse and inclusive workspace, and for our purposes a security team, but claim, "We just don't get diverse candidates applying," what are your ideas and strategies to make sure your hiring team is looking in all the right places, and not, say, using AI resume software to intentionally or unintentionally filter out candidates who don't adhere to a certain narrow set of requirements?
[00:36:10] NI: That is an ongoing challenge, Chris, in recruitment and onboarding. If someone tells me, "You should reach out to this person, because this person worked with me for many, many years with a client," there is bias there. I have experienced that many, many times, but I pause for a moment and say, "No, I should not do this." It has to be a very objective exercise. When we look into AI intervention in this process, again, it has to be very realistic and very, very objective, but take into account a variety of datasets. A number of global organizations have started to kick off blind CV reviews: not putting in names, encouraging candidates not to include their first name and surname, or not to include their postcode. Because when you put in the postcode of a particular location, a prospective employer may say, "Oh, you come from quite a low-income background." That introduces that kind of bias even at the screening stage.
We also try to look into the area of CVs where, okay, it doesn't mean that because you graduated from Harvard, or Yale, or an Ivy League school, or Oxbridge, or even a top-100 university, you will be admitted as a different kind of applicant. We human beings like that, right? "Oh, you graduated from Harvard. Okay, come on, do the interview."
[00:37:38] CS: Right. “We’ve got another person on our team”.
[00:37:40] NI: In that kind of process, we just want to see how exactly your soft skills, or your emotional quotient, your emotional intelligence, can really add value to the organization and be part of its growth. Of course, it's easier said than done, and it's still a work in progress, even for Fortune 500 companies, Fortune 100 companies, and the top 100 companies in the world, especially from a people-and-culture perspective. They are taking much more of a hybrid approach, a combination of AI and human interaction, and very much learning from what they have done wrong and improving it on a day-to-day basis.
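The blind CV review Noris describes, stripping out names and postcodes before anyone scores the application, can be sketched as a small pre-processing step. This is only an illustrative sketch; the field names and record shape are hypothetical, not from any real applicant-tracking system.

```python
# Minimal sketch of a blind CV screening pre-processor.
# Field names below are hypothetical, chosen for illustration.

IDENTIFYING_FIELDS = {"first_name", "surname", "postcode", "photo_url"}

def anonymize_cv(cv: dict) -> dict:
    """Return a copy of the CV record with identifying fields removed,
    so reviewers (human or automated) score skills, not demographics."""
    return {k: v for k, v in cv.items() if k not in IDENTIFYING_FIELDS}

cv = {
    "first_name": "Ada",
    "surname": "Lovelace",
    "postcode": "SW1A 1AA",
    "skills": ["Python", "threat modelling"],
    "experience_years": 7,
}

blind = anonymize_cv(cv)
# 'blind' now carries only skills and experience_years
```

The design point matches the hybrid approach he mentions: the automated step only removes proxy attributes (a postcode is a proxy for income, a name for ethnicity or gender), while the substantive evaluation stays with human reviewers.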
[00:38:22] CS: Yeah. Thank you for that. That was a very good response. We're coming up on the end of our time here, but before we start to wrap, I want to ask: is there anything else about Humanising 2030 we should cover? We've covered a fairly narrow range of it, but I want to at least leave our listeners with an overarching sense of all the goals of Humanising 2030. Can you talk briefly about some of the other aspects of the project we haven't covered so far, and also about some of the concrete plans to actualize these things?
[00:38:59] NI: Sure. There are three activities Humanising 2030 aims to achieve. The first is that we would like to look into the top 100 apps, applications, in the world, and whether existing applications have certain elements of bias in the way they process personal data, whether there's an element of unconscious bias, and of conscious bias too, which is not easy to identify because it requires a lot of analysis. So that's the first. The second is looking into geography, and also culture, because each geography and culture has a different understanding of diversity, a different understanding of inclusivity, and even a different basic impression of DEI, for example. That's the second bit, more from a psychological, as well as a neurodiversity science, perspective. This is where those who really want to get involved, volunteers, whether you are existing professionals, doctoral researchers, or even students who would like to be part of this, should feel free to reach out to me.
Then the third element we're looking into is exactly what the data risk is when you do this. When I talk about data risk, I mean there's an element of cybersecurity impact when you process data in the current apps, and there's also an element of privacy risk, and an element of data governance, in the context of these apps. When I say shortlisting 100 apps, it doesn't mean we want to find fault, or to tell the providers, "This is wrong, you should improve it." No. We just want to see the current approach, the current level of understanding, and how they really view this in the current environment.
Of course, with 2030 we have eight years, Chris, to achieve that. There will be lots of changes as well, as we learn throughout the process. The end goal, Chris, is to share with the wider stakeholders, with regulators, with organizations, and of course, I would like to develop this as part of a podcast, maybe a future podcast with you, Chris. Perhaps a digital book on this, perhaps in ten years.
[00:41:22] CS: So as we wrap up today, Noris, can you tell us more about Breakwater Solutions, and some of the products or projects you're excited to talk about and work on in the months to come? And to add to that, if people want to get involved with Humanising 2030, how can they do that?
[00:41:37] NI: Sure. Breakwater is a global data risk management solutions company. James is our leader as CEO, formerly an IBM executive. We have a range of products that really help to improve and automate, and to balance productivity against cost. We have a consulting arm, in which I sit within the global consulting practice, led by Rebecca Patterson, one of the co-founders of Breakwater, working very closely with Jim Vint, also one of the co-founders. Within the global consulting practice, we have global data privacy, cybersecurity, information governance, risk and compliance, investigations, and data analysis and strategy. It's very holistic; it depends on the client's pain points, the client's challenges, and exactly what the client wants us to help them with, whether ongoing issues or improvements within the organization. Irrespective of where you are, whether in the US, the UK, Europe, Asia, or the Middle East, we're very global, very flexible, and very agile.
Currently, we're involved with quite a number of projects in analytics, as well as investigations, cybersecurity assessments, and data privacy implementations. For example, a data storage implementation for clients in the US. We also have certain projects with clients in the Middle East. Other areas in the pipeline are very much in the developing Middle East and North Africa landscape, where we really want clients to get the fundamentals right first, and then evolve through improvement toward their aspired level of maturity. In short, we are a data risk management solution. We want to help organizations and stakeholders, whether public sector or private sector, first to improve where you are today, and to ensure cost effectiveness and efficiency in your technology deployment.
Third, our holistic, technology-agnostic approach is a unique proposition. Again, we're not competing with other businesses like ours; we're more than happy to partner as long as we have the synergy to meet the client's expectations.
[00:43:52] CS: Nice. Regarding Humanising 2030, is that something where you're looking for volunteers, or for people from other companies you can partner with? How can people get involved with that if they want to?
[00:44:07] NI: Sure. Well, we are looking for partnerships, especially with those who really want to know more about Humanising 2030, and for volunteers from all continents around the world: the US, the UK, Europe, Asia, the Middle East. The more volunteers that are involved, the more inclusive Humanising 2030 will be. Feel free to reach out to me via LinkedIn, Noriswadi Ismail, or email me at noriswadi.ismail@breakwatersolutions.com. I'm reachable and approachable.
[00:44:45] CS: Wonderful. Well Noris, thank you so much for your time and insights. This was an absolute joy to speak with you.
[00:44:50] NI: You're welcome, Chris. Thanks so much for having me here, and here's to a productive time ahead in the remaining months of 2022, and into 2023, for all.
[00:45:01] CS: Absolutely.
[00:45:02] CS: As always, I'd like to thank all of you who are listening to and watching the Cyber Work podcast on, as I said, an unprecedented scale. In the last three months, you've helped us to more than double Cyber Work viewership on YouTube. For that, I'm thankful and humbled. So if you're subscribing, thank you. If you're watching when it goes live on Mondays at 1:00 PM Central, thank you again. And if you're telling friends and colleagues and sharing it around, thank you times three. We're delighted to have you all along for the ride.
In wrapping up, every week on Cyber Work, listeners ask us the same question. What cybersecurity skills should I learn? Try this. Go to infosecinstitute.com/free to get your free cybersecurity talent development eBook. It’s got in-depth training plans for the 12 most common roles, including SOC analyst, penetration tester, cloud security engineer, information risk analyst, privacy manager, secure coder and more. We took notes from employers and a team of subject matter experts to build training plans that align with the most in demand skills. You can use the plans as is or customize them to create a unique training plan that aligns with your own unique career goals. Once again, that’s infosecinstitute.com/free or click the link in the description below to get your free training plan, plus many more free resources for Cyber Work listeners. Do it. infosecinstitute.com/free.
Thank you very much once again to Noriswadi Ismail, Breakwater Solutions, and Humanising 2030. Thank you all so much for watching and listening. We'll speak to you next week.