In an age of escalating data breaches and regulatory fines, stakeholders are not just interested in the solutions you deliver; they are also critically concerned with how securely and responsibly those solutions handle data. For product managers, project managers, and software developers, building trust lies not only in delivering functional features but also in demonstrating an unflinching commitment to security and privacy. In this article we explain the principles of Secure by Design and the complementary concept of Defence in Depth.

The terms ‘Secure by Design’ and ‘Privacy by Design’ have become more than mere buzzwords; they represent a fundamental shift in approach to software design, an ethos that signifies how you respect the data you work with, and by extension, the users who own that data. It’s a critical differentiator that not only safeguards against legal repercussions but also offers a competitive edge. Why? Because businesses that prioritise these security principles are seen as trustworthy stewards of data, making them more attractive to clients and partners alike.

What is meant by Secure by Design?

Secure by Design is an approach that integrates security measures into a system’s development lifecycle from the outset, rather than bolting them on later, giving more efficient, effective, and holistic protection against threats.

Why is Secure by Design important?

Understanding the underlying principles of Secure by Design is not just a ‘nice-to-have’ but an essential element of responsible business practice in today’s digital ecosystem. These principles provide the foundational knowledge required to embed security and privacy into the DNA of your projects, thereby allowing you to innovate with confidence, keep your customers’ trust, and importantly, stay ahead of the regulatory curve.

So what exactly are these security principles and why are they so crucial for your IT and analytics projects?

Secure by Design Principles

A commonly accepted framework is based on the eight principles initially outlined by Saltzer and Schroeder back in 1975 – they are still highly relevant today!

  1. Least Privilege

    Limit users’ access rights to the bare minimum necessary to complete their job functions. Reducing access privileges minimises the potential damage from accidental mishaps or intentional malfeasance.

    Example: A regular employee shouldn’t have admin rights to a critical system nor any access to a system they don’t use; they should have only the permissions necessary to perform their job.
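    The idea can be sketched as a simple role-to-permission mapping, where anything not explicitly granted is denied (a minimal sketch; the role and permission names are illustrative, not from any particular system):

```python
# Minimal sketch of least privilege: each role maps to the smallest set of
# permissions needed for that job function; everything else is denied.
ROLE_PERMISSIONS = {
    "employee": {"read_reports"},
    "analyst": {"read_reports", "run_queries"},
    "admin": {"read_reports", "run_queries", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only permissions explicitly listed for the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A regular employee can read reports but cannot manage users.
assert is_allowed("employee", "read_reports")
assert not is_allowed("employee", "manage_users")
```

    Note that an unknown role falls through to an empty permission set, so the lookup itself fails safe rather than raising or defaulting to broad access.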

  2. Fail-Safe Defaults

    By default, systems should deny all access and only grant permissions when explicitly required. This minimises the chance of unauthorised access due to overlooked configurations. You will need a process to identify and establish secure defaults.

    Example: Imagine a cloud storage service where a user can create folders to share files with other users. In a fail-safe default setting, every newly created folder would be set to “private” by default. This ensures that files cannot be accessed by anyone other than the creator unless explicit permission is given to share with specific individuals. This is in contrast to a folder being “public” by default, which would expose all its contents to anyone with access to its URL.

  3. Economy of Mechanism

    This principle advocates for keeping the design as simple and small as possible. Simpler systems are easier to secure and test, as they present fewer potential points of failure or exploitation.

    Example: Using a monolithic architecture with many dependencies can create numerous potential software vulnerabilities. Switching to a microservices architecture allows you to isolate functionalities, making it easier to secure individual components.

  4. Complete Mediation

    Complete mediation ensures that every request to access an object or resource is checked against the security policy, every time, without fail. This ensures there are no loopholes or backdoors that allow unauthorised access.

    Example: Imagine an application that grants access based on a single login at startup. If the security check is not mediated completely, a user might gain more access than intended if their roles change during the session. Complete mediation would re-check permissions continuously or whenever a new resource is accessed to ensure consistent application of security policies.
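    The contrast with a login-time-only check can be sketched as follows: the permission lookup happens inside every operation, so a mid-session role change takes effect immediately (a minimal sketch with hypothetical role names, not a production authorisation system):

```python
# Sketch of complete mediation: permissions are looked up on every resource
# access, not cached at login, so role changes take effect immediately.
current_roles = {"carol": "editor"}   # mutable: roles can change mid-session
ROLES_THAT_CAN_DELETE = {"admin"}

def delete_document(user: str, doc_id: str) -> str:
    # Re-check the live role on every call instead of trusting a login-time check.
    if current_roles.get(user) not in ROLES_THAT_CAN_DELETE:
        raise PermissionError(f"{user} may not delete {doc_id}")
    return f"deleted {doc_id}"

try:
    delete_document("carol", "doc-1")
except PermissionError:
    pass  # denied: carol is only an editor

current_roles["carol"] = "admin"      # role changes during the session
assert delete_document("carol", "doc-1") == "deleted doc-1"
```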

  5. Open Design

    The principle of Open Design states that the architecture and design of a system should be openly accessible and not considered the key to system security. In essence, even if an attacker knows how the system is built, it should still not be possible for them to compromise the system.

    Transparency in design promotes collective security responsibility. It allows third-party services to run a vulnerability assessment on your system, thereby improving its overall security.

    Example: Consider the rise of open-source software like Linux. Because its design is open, a wide community of developers can inspect the code for security vulnerabilities. Any discovered flaws can be addressed quickly, making the system more robust against potential security risks.

  6. Separation of Privileges

    Separation of privilege is a security design philosophy that advocates for requiring multiple conditions to be met, or multiple actors to be involved, before allowing access or the execution of a function. Rather than relying on one single authentication factor or control measure, the principle calls for multiple independent mechanisms to substantiate the legitimacy of a request or operation.

    The benefit of this is two-fold: firstly, it significantly decreases the risk associated with any single point of failure or vulnerability. Secondly, it enforces an additional layer of validation, ensuring a more robust and secure environment.

    Example: In the context of a software development process, imagine that code cannot be pushed to production by a single developer, regardless of their permissions. Instead, the system could require that code must first be reviewed and approved by a peer (condition 1), pass automated testing (condition 2), and then receive final approval from a team leader or manager (condition 3) before it gets deployed. This multi-tiered approach ensures that several checks and balances are in place, making it far less likely for insecure or malicious code to slip into the production environment.
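    The deployment gate described above reduces to a conjunction of independent conditions, where no single actor or check is sufficient on its own (a deliberately minimal sketch; the function and parameter names are illustrative):

```python
# Sketch of separation of privilege: a deployment proceeds only when several
# independent conditions all hold; no single actor can push code alone.
def may_deploy(peer_approved: bool, tests_passed: bool, lead_approved: bool) -> bool:
    return peer_approved and tests_passed and lead_approved

# Any single missing condition blocks the deployment.
assert not may_deploy(peer_approved=True, tests_passed=True, lead_approved=False)
assert may_deploy(peer_approved=True, tests_passed=True, lead_approved=True)
```

    In a real pipeline each flag would come from a separate system (code review tool, CI runner, approval workflow), which is precisely what makes the conditions independent.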

  7. Least Common Mechanism

    This principle advocates for minimising the use of shared resources or mechanisms to reduce the potential avenues for unauthorised access or data leaks.

    When multiple users or processes share a common mechanism (like a library or a database), the chance that a compromise in one area could lead to a compromise in another area increases. By limiting these common mechanisms, you reduce the risk of a chain reaction of security failures.

    In software development, it’s common for multiple users to be interacting with an application at the same time. A classic mistake is to use a common or easily guessable session identifier for all users. If one user gains access to this common identifier, they could potentially impersonate any other user, leading to a massive security breach.

    The principle of Least Common Mechanism would suggest that each user should be given a unique, randomly-generated session identifier. This way, if one user’s session identifier is compromised, it doesn’t put all the other users at risk. Each session is effectively isolated from the others, reducing the risk of a single point of failure.
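    In Python, unguessable per-user session identifiers can be generated with the standard-library `secrets` module (a minimal sketch; the in-memory session store is illustrative and a real application would use server-side session storage):

```python
import secrets

# Sketch of least common mechanism applied to sessions: every login gets its
# own cryptographically random identifier instead of a shared or guessable one.
sessions: dict[str, str] = {}   # session_id -> username

def create_session(username: str) -> str:
    session_id = secrets.token_urlsafe(32)   # 256 bits of randomness per session
    sessions[session_id] = username
    return session_id

a = create_session("alice")
b = create_session("bob")
assert a != b               # each session is isolated from the others
assert sessions[a] == "alice"
```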

  8. Psychological Acceptability

    The principle of Psychological Acceptability ensures that security measures serve their purpose without hindering the user’s experience. You should aim to strike a balance between robust security and user convenience.

When security protocols are cumbersome or complex, users may seek ways to bypass them, thereby defeating the purpose of having security in the first place. You will increase the likelihood of your security measures being used effectively and consistently if you make sure that they are as transparent and user-friendly as possible.

Example: Two-factor authentication (2FA) is a commonly used security measure. However, if implementing 2FA requires a user to use an additional hardware token, carry out multiple steps, and spend extra time every time they log in, users may resist its implementation.

Now, consider a more psychologically acceptable implementation of 2FA that uses a mobile app. The user simply receives a push notification on their phone when trying to log in, and they can approve it with a single tap. This method is fast, simple, and minimally intrusive, thereby encouraging users to adopt it willingly.


Defence in depth

Beyond the key security principles above, the Defence in depth approach demands that you deploy multiple layers of security controls (physical, technical, and administrative) so if one fails, others still provide protection from attack.

The approach acts as a safety net. It provides a multi-faceted security posture that makes it considerably more challenging for unauthorised users to gain access to sensitive information or systems. If one layer fails, the remaining layers continue to provide protection.


Imagine an online banking application. Defence in depth would entail not just requiring a username and password (layer 1), but also deploying multi-factor authentication (layer 2), like sending an OTP (One Time Password) or authenticator prompt to the user’s registered mobile phone. Beyond this, the application could use network firewalls to filter traffic (layer 3), encryption to secure the data both at rest and in transit (layer 4), and implement regular security audits (layer 5).

But it doesn’t stop at just technology; employee education about phishing scams could serve as another layer (layer 6). If someone were to receive a phishing email trying to obtain sensitive customer data, the employee trained in identifying such scams would act as another defensive layer to prevent a potential security breach.

Each layer aims to mitigate risks, minimise the attack surface area and protect against different types of vulnerabilities. The more layers you have, the more resilient the system becomes against attacks, both expected and unexpected. An in-depth approach provides a comprehensive, holistic approach to securing your software system assets and should be an integral part of your security strategy.
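The layering described above can be sketched as a chain of independent checks, where a failure at any one layer blocks the request (a simplified illustration with hypothetical check names; real layers would be separate systems such as firewalls, identity providers, and MFA services):

```python
# Sketch of defence in depth for a login request: several independent layers
# must each pass; a failure at any one layer blocks the request.
def ip_not_blocked(request) -> bool:      # layer 1: network filtering
    return request["ip"] not in {"203.0.113.9"}

def password_ok(request) -> bool:         # layer 2: credentials
    return request["password"] == request["expected_password"]

def otp_ok(request) -> bool:              # layer 3: multi-factor authentication
    return request["otp"] == request["expected_otp"]

LAYERS = [ip_not_blocked, password_ok, otp_ok]

def allow_login(request) -> bool:
    return all(layer(request) for layer in LAYERS)

req = {"ip": "198.51.100.7", "password": "pw", "expected_password": "pw",
       "otp": "123456", "expected_otp": "123456"}
assert allow_login(req)
req["otp"] = "000000"        # compromising one layer is not enough
assert not allow_login(req)
```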

To further demonstrate the layering in the defence in depth approach, we’ve listed an assortment of security controls that can be involved in a single solution.

| Network Security | Data & Storage Security | Access Control | Development & Coding | Cloud Security |
| --- | --- | --- | --- | --- |
| Web Application Firewall | Encryption | Identity and Access Management | DevSecOps | Infrastructure as a Service |
| Virtual Private Network | Data Masking | Single Sign-On | API Security | Platform as a Service |
| Firewalls | Data Loss Prevention | Multi-factor Authentication | Code Scanning | Containers |
| Intrusion Detection System | Hardware Security Module | OAuth 2.0 | Git Hooks | |
| Patching | | Access Control Lists | Content Security Policy | |

Find help with application security

By incorporating these principles into your IT or analytics software development project, you’re not only securing it against today’s threats but also future-proofing it for tomorrow’s challenges. Of course, strong security and privacy go together, like lock and key – don’t skimp on either! If you’re lacking the skills or resources, seek out reputable secure-by-design consultancy services.

Stay tuned for our forthcoming article on implementing the principles of Privacy by Design.

Main photo by Alina Grubnyak on Unsplash
