Securing the Insecure

Craig Spencer, a 33-year-old doctor, returned to the USA from Africa on October 17th after treating Ebola patients. Just a few days later, he tested positive for Ebola. Everyone was concerned, especially the people around him and New Yorkers in general. The mayor of New York came before the media and gave an assurance to the citizens: the city had the world's top medical staff as well as the most advanced medical equipment to treat Ebola, and it had been preparing for this for many months. That surely calmed most people down.

Let me take another example.

When my little daughter was three months old, she was happy in anyone's arms. Now she is eleven months old and knows who her mother is. Whenever she is in any difficulty, she keeps crying until she gets to her mother. She only feels secure in her mother's arms.

When we type a password on a computer screen, we worry that it might be seen by the person next to us. But we never worry about our important business emails being read by the NSA. Why? Either it's totally out of our control, or we believe the NSA will only use them to strengthen national security and for nothing else.

What I am trying to say with all these examples is that insecurity is a perception. It's a perception triggered by undesirable behaviors. An undesirable behavior is a reflection of how far a situation deviates from correctness.

It's all about perception, and about building that perception. There are no 100% secure systems on earth. Most of the cryptographic algorithms developed in the '80s and '90s are now broken, due to advances in computer processing power.


In the computer world, most developers and operators are concerned about correctness: achieving the desired behavior. If you deposit $1,000 in your account, you expect your savings to grow by exactly 1,000. When you send a document to a printer, you expect the output to be exactly what you see on the computer screen.

Security, on the other hand, is concerned with preventing undesirable behaviors.


There are three security properties that, if violated, can lead to undesirable behaviors: confidentiality, integrity, and availability.

Confidentiality means protecting data from unintended recipients, both at rest and in transit. You achieve confidentiality by protecting transport channels and storage with encryption.
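To make the idea concrete, here is a minimal sketch of my own (not from the original text) of confidentiality through encryption, using a one-time pad: the message is XORed with a random key of the same length. Real systems use vetted standards such as AES or TLS rather than hand-rolled schemes; this is only meant to show the concept of keeping data unreadable to unintended recipients.

```python
import os

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: XOR the message with a random key of equal length.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same key recovers the original message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = encrypt(b"account balance: $1000")
print(decrypt(key, ct))  # b'account balance: $1000'
```

Without the key, the ciphertext reveals nothing about the message; whoever holds the key is the intended recipient.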

Integrity is a guarantee of data’s correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures. Both measures have to take care of data in transit as well as data at rest.
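As a small illustration of my own (the key and messages below are hypothetical), a detective integrity measure can be as simple as attaching a message authentication code (MAC) to data and verifying it on receipt:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret-key"  # hypothetical key shared by both parties

def sign(message: bytes) -> bytes:
    # Preventive side: the sender attaches a MAC computed with the shared key.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Detective side: the receiver recomputes the MAC and compares it in
    # constant time; any unauthorized modification makes the check fail.
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = sign(b"deposit $1000")
print(verify(b"deposit $1000", tag))   # True: data intact
print(verify(b"deposit $9000", tag))   # False: tampering detected
```

The same mechanism works for data at rest (store the tag alongside the file) and data in transit (send the tag with the message).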

Making a system available for legitimate users to access all the time is the ultimate goal of any system design. Security isn’t the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of the security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging. Attacks, especially on public endpoints, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack.


In March 2011, the RSA corporation was breached. Attackers were able to steal sensitive tokens related to RSA SecurID devices. These tokens were then used to break into companies that used SecurID.

In October 2013, the Adobe corporation was breached. Both source code and customer records were stolen, including passwords.

Just a month after the Adobe attack, in November 2013, Target was attacked, and data from 40 million credit and debit cards was stolen.

How are all these attacks possible? Many breaches begin by exploiting a vulnerability in the system in question. A vulnerability is a defect that an attacker can exploit, through a set of carefully crafted interactions, to produce undesired behavior. In general, a defect is a problem in either the design or the implementation of a system that causes it to fail to meet its desired requirements.

To be precise, a flaw is a defect in the design, and a bug is a defect in the implementation. A vulnerability is a defect that affects the security-relevant behavior of a system, rather than just its correctness.

If you take the RSA 2011 breach, it was based on a vulnerability in the Adobe Flash player. A carefully crafted Flash program, when run by a vulnerable Flash player, allowed the attacker to execute arbitrary code on the machine, which was in fact due to a bug in the player's code.

To ensure security, we must eliminate bugs and design flaws, and make those that remain harder to exploit.

The Weakest Link

In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was the way they did it. They found out the weakest link in the system and attacked it. To transfer money directly into the store’s cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and then connect a vacuum cleaner to capture the money. They didn’t have to deal with the coffer shield.

The takeaway is that a proper security design must consider all the communication links in the system. Your system is no stronger than its weakest link.

Defense in Depth

A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. Most international airports, which are at high risk of terrorist attacks, follow a layered approach in their security design. On November 1, 2013, a man dressed in black walked into the Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers. The screening checkpoint was the first layer of defense; in case someone gets through it, there has to be another layer to prevent a gunman from boarding a plane and taking control of it. If there had been a security layer before the TSA checkpoint, perhaps just to scan everyone entering the airport, it might have detected the weapon and probably saved the life of the TSA officer. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also install a burglar alarm system to secure an empty garage?

Insider Attacks

Insider attacks are less complicated than external attacks, but highly effective. From the confidential US diplomatic cables leaked by WikiLeaks to Edward Snowden's disclosures about the National Security Agency's secret operations, these were all insider attacks. Both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget protecting their systems from external intruders, but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco.

Insider attacks are identified as a growing threat in the military. To address this concern, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Cyber Insider Threat (CINDER) in 2010. The objective of this project was to develop new ways to identify and mitigate insider threats as soon as possible.

Security by Obscurity

Kerckhoffs' principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. Microsoft's NTLM design was kept secret for some time, but when Samba engineers reverse-engineered it (to support interoperability between Unix and Windows), they discovered security vulnerabilities caused by the protocol design itself. In a proper security design, it's highly recommended not to use any custom-developed algorithms or protocols. Standards are like design patterns: they've been discussed, designed, and tested in an open forum. Every time you have to deviate from a standard, you should think twice, or more.

Software Security

Software security is a part, or a branch, of computer security. It is the kind of computer security that focuses on the secure design and implementation of software, using the best languages, tools, and methods. The focus of study in software security is the code.

In other words, it focuses on avoiding software vulnerabilities, flaws, and bugs. While software security overlaps with and complements other areas of computer security, it is distinguished by its focus on a secure system's code. This focus makes it a white-box approach, whereas most other approaches are black-box: they tend to ignore the software's internals.

Why is software security's focus on the code important?

The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build defenses around it. Just like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.

Operating System Security

Let's consider a few standard methods for security enforcement and see how their black box nature presents limitations that software security techniques can address.

When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important. Instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets, and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files, or that untrusted user programs cannot set up trusted services on standard network ports.
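The kind of policy an OS enforces can be sketched in a few lines of Python (my illustration; the users, actions, and paths are hypothetical). The monitor looks only at the requested action and the policy, never at the program's code:

```python
# Hypothetical policy: which users may perform which actions on which paths.
POLICY = {
    ("alice", "read"):  {"/home/alice"},
    ("alice", "write"): {"/home/alice"},
    ("bob",   "read"):  {"/home/bob"},
}

def allow(user: str, action: str, path: str) -> bool:
    # The monitor checks each system call against the policy before
    # letting it proceed; anything not explicitly allowed is denied.
    prefixes = POLICY.get((user, action), set())
    return any(path.startswith(p) for p in prefixes)

print(allow("alice", "read", "/home/alice/notes.txt"))  # True
print(allow("alice", "read", "/home/bob/secret.txt"))   # False
```

This default-deny style, where every request is checked against an explicit policy, is the essence of OS-level mediation.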

The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system (DBMS) is a server that manages data whose security policy is specific to the application using that data. For an online store, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, that are not security sensitive at all. It is up to the DBMS, not the OS, to implement the security policies that control access to this data.
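To see why the OS cannot help here, consider this sketch of a DBMS-level, row-by-row policy (my own toy example with hypothetical table contents; to the OS, all of this is just one process reading one file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (owner TEXT, kind TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [
        ("alice", "account", "card ending 4242"),    # security sensitive
        (None,    "product", "blue widget, $9.99"),  # public
    ],
)

def fetch(user: str) -> list:
    # The DBMS-level policy: a user sees public rows plus their own
    # sensitive rows. This distinction is invisible at the OS level.
    return conn.execute(
        "SELECT kind, body FROM records WHERE owner IS NULL OR owner = ?",
        (user,),
    ).fetchall()

print(fetch("alice"))  # both rows
print(fetch("bob"))    # only the public product row
```

The granularity of the policy (individual rows, not files) is exactly what forces the software itself to enforce it.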

Operating systems are also unable to enforce certain kinds of security policies. An operating system typically acts as an execution monitor, which decides whether to allow or disallow a program action based on the current execution context and the program's prior actions. However, some kinds of policies, such as information-flow policies, simply cannot be enforced precisely without considering potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS.
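A software-level information-flow mechanism can be sketched as follows (a toy of my own; the labels and sink are hypothetical). The point is that the secrecy label travels with the data through computations, something an execution monitor watching individual actions cannot track:

```python
from dataclasses import dataclass

@dataclass
class Labeled:
    # A value tagged with a secrecy label.
    value: str
    secret: bool

def concat(a: Labeled, b: Labeled) -> Labeled:
    # The label propagates: the result is secret if any input was.
    return Labeled(a.value + b.value, a.secret or b.secret)

def send_to_public_log(item: Labeled) -> str:
    # A public sink must never receive data derived from a secret.
    if item.secret:
        raise PermissionError("information-flow violation")
    return item.value

greeting = Labeled("hello ", secret=False)
password = Labeled("hunter2", secret=True)
print(send_to_public_log(greeting))               # allowed
# send_to_public_log(concat(greeting, password))  # would raise
```

Because the check depends on the whole history of how a value was derived, it naturally lives in the software rather than in the OS.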

Firewalls and IDS

Another popular sort of security enforcement mechanism is a network monitor, such as a firewall or an intrusion detection system (IDS). A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users. An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet. An IDS can look for such packets and filter them out to prevent the attack from taking place.

Firewalls and IDSs are good at reducing the avenues for attack and preventing known vectors of attack. But both devices can be worked around. For example, most firewalls will allow traffic on port 80, because they assume it is benign web traffic. But there is no guarantee that port 80 only runs web servers, even if that's usually the case. In fact, developers invented SOAP, which stands for Simple Object Access Protocol (no longer an acronym since SOAP 1.2), partly to work around firewalls that block ports other than port 80. SOAP permits more general-purpose message exchanges, but encodes them using the web protocol.
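The firewall's decision rule, and its blind spot, fit in a few lines (my sketch; the rule set is hypothetical). Note that the filter sees only the port number, not what actually runs behind it:

```python
ALLOWED_PORTS = {80, 443}  # hypothetical rule set: web traffic only

def firewall_allows(dst_port: int) -> bool:
    # A simple stateless filter: a packet gets in only if it targets a
    # designated open port. Anything on port 80 is assumed to be benign
    # web traffic, whether or not that assumption holds.
    return dst_port in ALLOWED_PORTS

print(firewall_allows(80))   # True: assumed benign web traffic
print(firewall_allows(23))   # False: telnet is blocked
```

This is exactly why tunneling arbitrary messages over port 80, as SOAP does, walks straight through such a filter.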

IDS patterns are more fine-grained, and more able to look at the details of what's going on, than firewall rules. But IDSs can be fooled as well, by inconsequential differences in attack patterns. Attempts to fill those gaps with more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability. Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack. They are quite similar to IDSs, but they operate on files and thus have less stringent performance requirements. But they too can often be bypassed by making small changes to attack vectors.
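How an inconsequential variation defeats a signature can be shown concretely (my toy example; the signature and payloads are hypothetical):

```python
import re

# Hypothetical IDS signature for a known malicious payload.
SIGNATURE = re.compile(rb"DROP TABLE users")

def ids_flags(payload: bytes) -> bool:
    # Flag any packet whose contents match the known attack pattern.
    return SIGNATURE.search(payload) is not None

print(ids_flags(b"...; DROP TABLE users;--"))    # True: caught
# An inconsequential variation (extra whitespace) slips past the signature.
print(ids_flags(b"...; DROP  TABLE  users;--"))  # False: bypassed
```

The attack is semantically identical, but the pattern matcher only sees bytes, so the trivially mutated payload goes through.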


Heartbleed is the name given to a bug in version 1.0.1 of OpenSSL's implementation of the Transport Layer Security protocol, or TLS. The bug can be exploited by getting a server running the buggy OpenSSL to return portions of its memory. It is an example of a buffer overflow (more precisely, a buffer over-read). Let's look at black-box security mechanisms, and how they fare against Heartbleed.

Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS sees nothing wrong. For the latter, the exploit occurs while the TLS server is executing, leaving no obvious traces in the file system. Basic packet filters used by IDSs can look for signs of exploit packets; the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to apply variations in packet format, such as chunking, to bypass them. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data goes back over the encrypted channel. Compared to these, software security methods aim to go straight to the source of the problem, by preventing or more completely mitigating the defect in the software.
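The essence of the bug can be modeled in a few lines of Python (a toy of my own; the real bug is in OpenSSL's C code, and the "memory" contents here are hypothetical). The server echoes back as many bytes as the client *claims* to have sent, without checking the actual payload size:

```python
# Toy model of the TLS heartbeat handler. Adjacent "memory" holds data
# that was never meant to leave the server.
SERVER_MEMORY = bytearray(b"HELLO" + b"secret-session-key" + b"...")

def heartbeat(payload: bytes, claimed_length: int) -> bytes:
    SERVER_MEMORY[: len(payload)] = payload
    # BUG: no check that claimed_length <= len(payload), so the reply
    # can over-read into neighboring memory.
    return bytes(SERVER_MEMORY[:claimed_length])

print(heartbeat(b"HELLO", 5))    # b'HELLO': an honest request
print(heartbeat(b"HELLO", 23))   # b'HELLOsecret-session-key': leaked
```

The software-security fix is one line, validating `claimed_length` against the actual payload, which is exactly the kind of defect-level remedy the black-box mechanisms above cannot provide.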

Threat Modeling

Threat modeling is a methodical, systematic approach to identifying possible security threats and vulnerabilities in a system deployment. First you need to identify all the assets in the system. Assets are the resources you have to protect from intruders. These can be user records/credentials stored in an LDAP server, data in a database, files in a file system, CPU power, memory, network bandwidth, and so on. Identifying assets also means identifying all their interfaces and their interaction patterns with other system components. For example, the data stored in a database can be exposed in multiple ways: database administrators have physical access to the database servers, application developers have JDBC-level access, and end users have access through an API. Once you identify all the assets to be protected and all the related interaction patterns, you need to list all possible threats and associated attacks. Threats can be identified by observing these interactions, based on the CIA triad.

From the application server to the database is a JDBC connection. A third party can eavesdrop on that connection to read or modify the data flowing through it. That’s a threat. How does the application server keep the JDBC connection's username and password? If they’re kept in a configuration file, anyone with access to the application server’s file system can find them and then access the database over JDBC. That’s another threat. The JDBC connection is protected with a username and password, which can potentially be broken by a brute-force attack. Another threat.
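To show why a brute-force attack on a weak credential is a realistic threat, here is a small sketch of my own (the target password is hypothetical). A four-character lowercase password gives only 26^4 = 456,976 candidates, which a single CPU core exhausts almost instantly:

```python
import hashlib
import itertools
import string

# Hypothetical stolen hash of a weak 4-character lowercase password.
target = hashlib.sha256(b"abcz").hexdigest()

def brute_force(target_hash: str):
    # Try every 4-letter lowercase candidate until one hashes to the
    # target. Longer, random secrets push the search space out of reach,
    # which is why weak passwords turn this from theory into a threat.
    for candidate in itertools.product(string.ascii_lowercase, repeat=4):
        word = "".join(candidate).encode()
        if hashlib.sha256(word).hexdigest() == target_hash:
            return word.decode()
    return None

print(brute_force(target))  # 'abcz'
```

Each extra character of a random password multiplies the attacker's work by the alphabet size, which is the whole argument for long, high-entropy credentials.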

Administrators have direct access to the database servers. How do they access the servers? If access is open for SSH via username/password, a brute-force attack is a likely threat. If it’s based on SSH keys, where are those keys stored? Are they kept on administrators' personal machines or uploaded to a key server? Losing an SSH key to an intruder is another threat. How about the ports? Have you opened any ports on the database servers, where an intruder can telnet in and take control, or attack an open port to exhaust system resources? Can the physical machine running the database be accessed from outside the corporate network, or is it only available over VPN?

All these questions lead you to identify possible threats against the database server. End users have access to the data via the API. This is a public API, exposed through the corporate firewall. A brute-force attack is always a threat if the API is secured with HTTP Basic/Digest authentication. Having broken the authentication layer, anyone could get free access to the data. Another possible threat is someone accessing confidential data as it flows through the transport channels, by executing a man-in-the-middle attack. DoS is also a possible threat: an attacker can send carefully crafted, malicious, extremely large payloads to exhaust server resources. STRIDE is a popular technique for identifying the threats associated with a system in a methodical manner. STRIDE stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
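The methodical part of STRIDE can be sketched as a simple checklist pairing every interface with every category (my illustration; the interface names are hypothetical, taken from the database example above):

```python
# The six STRIDE categories, used as a checklist against each interface.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical interfaces identified during asset analysis.
INTERFACES = ["JDBC connection", "SSH access to DB servers", "public API"]

def enumerate_threats():
    # Pair every interface with every STRIDE category, producing the raw
    # list of questions the threat model must answer one by one.
    return [f"{name}: {threat}" for name in INTERFACES for threat in STRIDE]

threats = enumerate_threats()
print(len(threats))  # 18 candidate threats to evaluate
print(threats[0])    # 'JDBC connection: Spoofing'
```

Each generated pair becomes a question ("can the JDBC connection be spoofed?"), which is what makes the technique systematic rather than ad hoc.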