Links-1

[Posts titled “Links” will contain interesting articles, write-ups that I found very engaging and useful.]

I love to read technical articles that are not too nerdy but approach the reader in a gentle way. I scour the web for such articles on topics of my interest, maybe to get an initial grip on what a subject is all about, and also to revisit them at times to refresh my memory. One such website is by Gustavo Duarte. It is a go-to site for gaining a solid beginner-level understanding of computer internals, especially on Linux.

Restart

This is a meta post, so be warned. This blog was started early this year. I had the intention of maintaining it as a technical blog, particularly on network security. After a few posts the blog became inactive. Though I wished to post some entries, I wasn't forcing myself into it. Now, with the close of the year nearing, I have decided to give it another try. I have renamed the blog. I am not sure how good the new name is, but that's what I could settle on after intense thinking. I have also decided to make this blog a generic one instead of confining it to technical writing. I have a place at Medium where I have been posting a few things (irregularly, yes). Maybe I can have everything in one place. Let's see.

So, here's restarting my attempt to get better at writing. I hope I keep the engine running without halting and do not post one more meta entry on blogging itself.

IPEK’s Role In DUKPT

My reply to a question on Security Stack Exchange about IPEK's role in the DUKPT mechanism:

http://security.stackexchange.com/questions/56414/what-is-the-point-to-the-ipek-in-dukpt/56415#56415

Derived Unique Key Per Transaction (DUKPT) is a key management scheme in which a unique key, derived from a fixed key, is used for every transaction. Therefore, if a derived key is compromised, future and past transaction data are still protected, since the next or prior keys cannot be determined easily. DUKPT is specified in ANSI X9.24 part 1.

The key aspect of DUKPT is that for each transaction originated from the PIN device (such as a POS terminal), the key used for encryption shall be unique. The key shall have no relation to the keys that were used in the past or the keys that might be used for future transactions. The encryption algorithm used is TDES.

IPEK is derived from the Base Derivation Key (BDK). The inputs used to create the IPEK are the PIN device ID and the Key-Set ID. The Key-Set ID uniquely identifies the BDK. So you can see that with one BDK you can have multiple IPEKs, one IPEK for each device with a unique ID. The BDK shall not be known to the PIN device. The BDK is a super-secret key which shall be known only to the gateway with which all PIN devices communicate. The gateway shall store the BDK securely in an HSM. The BDK cannot be shared among the PIN devices. Period. Thus comes the IPEK to the rescue.
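
To make this a little more concrete, here is a rough sketch in C of the IPEK derivation as it is commonly described for the TDES variant of ANSI X9.24-1: the left half of the IPEK is the TDES encryption, under the BDK, of the leftmost 8 bytes of the initial Key Serial Number (KSN) with the counter bits cleared, and the right half is the same block encrypted under an XOR variant of the BDK. The use of OpenSSL's legacy DES API and the sample BDK/KSN values in main() are my own choices for illustration; this is a sketch, not production code.
#include <stdio.h>
#include <string.h>
#include <openssl/des.h>

/* Two-key TDES (K1, K2, K1) ECB encryption of a single 8-byte block. */
static void tdes_encrypt_block(const unsigned char key[16],
                               const unsigned char in[8],
                               unsigned char out[8])
{
    DES_cblock k1, k2, inb, outb;
    DES_key_schedule ks1, ks2;

    memcpy(k1, key, 8);
    memcpy(k2, key + 8, 8);
    memcpy(inb, in, 8);
    DES_set_key_unchecked(&k1, &ks1);
    DES_set_key_unchecked(&k2, &ks2);
    DES_ecb3_encrypt(&inb, &outb, &ks1, &ks2, &ks1, DES_ENCRYPT);
    memcpy(out, outb, 8);
}

/* Derive the 16-byte IPEK from the 16-byte BDK and the 10-byte initial KSN
 * (Key-Set ID plus device ID, with the 21-bit transaction counter at zero). */
static void derive_ipek(const unsigned char bdk[16],
                        const unsigned char ksn[10],
                        unsigned char ipek[16])
{
    static const unsigned char mask[16] = {
        0xC0, 0xC0, 0xC0, 0xC0, 0x00, 0x00, 0x00, 0x00,
        0xC0, 0xC0, 0xC0, 0xC0, 0x00, 0x00, 0x00, 0x00 };
    unsigned char block[8], variant[16];
    int i;

    memcpy(block, ksn, 8);                 /* leftmost 8 bytes of the KSN   */
    block[7] &= 0xE0;                      /* clear the counter bits        */

    tdes_encrypt_block(bdk, block, ipek);  /* left half of the IPEK         */

    for (i = 0; i < 16; i++)               /* XOR variant of the BDK        */
        variant[i] = bdk[i] ^ mask[i];
    tdes_encrypt_block(variant, block, ipek + 8);   /* right half           */
}

int main(void)
{
    /* Sample values for illustration only. */
    unsigned char bdk[16] = { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
                              0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 };
    unsigned char ksn[10] = { 0xFF, 0xFF, 0x98, 0x76, 0x54, 0x32, 0x10, 0xE0,
                              0x00, 0x00 };
    unsigned char ipek[16];
    int i;

    derive_ipek(bdk, ksn, ipek);
    printf("IPEK: ");
    for (i = 0; i < 16; i++)
        printf("%02X", ipek[i]);
    printf("\n");
    return 0;
}
The sketch can be compiled with something like gcc ipek_sketch.c -lcrypto (the DES_* functions are deprecated in newer OpenSSL releases but still available).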

Once the terminal has been initialized with the IPEK, it populates the 21 Future Key registers by invoking a non-reversible transformation process. The inputs to this are the IPEK and a value which is a function of the register number. Then the IPEK is discarded. Now the terminal has 21 Future Keys stored in 21 registers. The PIN device can then communicate with the gateway, encrypting with a generated key and sending along metadata that includes the Key-Set ID and the device ID. With this metadata and the BDK, the gateway derives the same key for decryption.

Core Dumps

In the last post I discussed the Heartbleed defect in OpenSSL. The vulnerability opens up a method by which a hacker, over a TLS connection with a server, can retrieve sensitive information like the private key. The server can be made to emit its memory contents over the wire without being aware of it. There have been successful attempts at recovering a full server key using this vulnerability. It is clear that if an application holds sensitive information in memory, it should take steps to prevent that information from becoming visible.

One way in which a snapshot of an application's memory can be obtained is by causing the application to crash and collecting the core dump of the process. The crash can be due to a genuine bug, or the application could have received intentionally crafted signals that cause the process to crash. So if your application potentially holds sensitive information in memory, you may want to prevent that data from being written to disk on a program crash. An attacker can use debugging tools like gdb to analyze the core dump and make use of the information.
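
For instance, on a typical Linux box a crash (and hence a core file) can be forced with commands along these lines; the program name here is just a placeholder:
 ulimit -c unlimited        # in the shell that starts the program: allow core files
 ./victimapp &              # run the (hypothetical) application
 kill -SEGV $!              # deliver SIGSEGV; the kernel writes out a core file on crash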

So, how do we prevent the dumps on a process crash? Consider the program with a very nasty bug shown below. Assume this to be a sophisticated program which handles sensitive information like passwords. When this program is run, for obvious reasons, it is going to crash and dump core.
#include <stdio.h>

int main()
{
    char password[] = "secret"; // Sensitive data sitting in process memory

    int *tmp = NULL;
    *tmp = 100; // NULL pointer dereference: causes a crash and a core dump

    return 0;
}
Running the core dump through gdb, we can see, as shown below, the contents of the char array, which in our case holds a password that by definition should not be known to all. Dumping core also has another effect: in the case of large programs, the core dump can be huge and can take up sizable disk space, which can hamper the system in other ways. So the best option is to disable dumping of core on a process crash.
[Screenshot: gdb session on the core dump showing the contents of the password array]
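For reference, assuming the binary was built with debugging symbols (-g), the inspection boils down to commands along these lines; the name and location of the core file depend on the system's core_pattern setting:
 gdb ./a.out core
 (gdb) bt              # the backtrace points at the NULL dereference in main()
 (gdb) print password  # the secret is right there in the dump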
On Linux systems, we can use the ulimit option to prevent the core dump as follows:
 ulimit -c 0    # This sets the maximum core file size to zero bytes.
But an attacker may be able to manipulate the program's runtime environment such that the above setting is overridden, so it is safer to enforce the limit from within the program itself:
#include <stdio.h>
#include <sys/resource.h>

/* Set both the soft and hard limits for the core file size to zero. */
void spc_limit_core(void)
{
    struct rlimit rlim;

    rlim.rlim_cur = rlim.rlim_max = 0;
    setrlimit(RLIMIT_CORE, &rlim);
}

int main()
{
    spc_limit_core(); // Disable core dumps before anything sensitive enters memory.

    char *password = "secret";

    int *tmp = NULL;
    *tmp = 100; // Cause crash

    return 0;
}
By setting RLIMIT_CORE to 0, we prevent the process from leaving a memory snapshot on the disk. We have to make the call to spc_limit_core() at the very beginning of the program, before anything sensitive comes into memory. One disadvantage of preventing core dumps is that it makes debugging a process crash difficult: no forensic clues are left behind for us to analyze the cause of the crash. This can be overcome by allowing core dumps only when the program is run in a controlled manner. A debug version of the application can be built separately, with a compilation option that enables or disables the call to spc_limit_core(). This can be done by enclosing the call within a preprocessor directive as shown below:
#ifndef DEBUG
    spc_limit_core();
#endif
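For example, with gcc the two builds might be produced as follows (the target names are illustrative):
 gcc -g -DDEBUG -o myapp_debug myapp.c   # DEBUG defined: spc_limit_core() is skipped, core dumps allowed (subject to ulimit)
 gcc -O2 -o myapp myapp.c                # release build: core dumps are suppressed at startup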

Heartbleed Defect in OpenSSL

One of the recent vulnerabilities discovered in OpenSSL is the defect named the 'Heartbleed' bug. The defect was quite serious and understandably created highly visible discussions. And of course there was an immediate patch from OpenSSL as well.

So, what is Heartbleed? 

There is a mechanism within the TLS protocol called heartbeat exchanges. A client and server communicating over TLS can keep a check on the peer's availability by sending a heartbeat message and waiting to see if it is reciprocated. This is akin to the keep-alive mechanism that the TCP layer provides. Now, why one should have a keep-alive mechanism in the TLS layer when TCP already has one is a good question; it is more useful in the case of DTLS. The question gets discussed well here. Heartbeats can also serve as a way of having some activity on the TLS pipe to avoid disconnection by firewalls that do not like inactive connections. If it were not in the TLS layer, applications would have to carry the burden of keeping the connection alive against watchful firewalls.
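
For reference, the heartbeat message defined in RFC 6520 carries a type byte, a two-byte payload length, the payload itself and some random padding. Rendered loosely as a C struct (the RFC uses the TLS presentation language, and the fields sit back to back on the wire):
#include <stdint.h>

/* Loose C rendering of the RFC 6520 HeartbeatMessage layout. */
struct heartbeat_message {
    uint8_t  type;            /* heartbeat_request(1) or heartbeat_response(2)    */
    uint16_t payload_length;  /* length the sender claims its payload has         */
    /* payload_length bytes of payload follow, then at least 16 bytes of padding. */
};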

In simple terms, any entity, client or server, can send a heartbeat message and wait for an acknowledgment. The message primarily contains a payload of a certain length, along with 2 bytes indicating the payload length. The receiver of the heartbeat request should respond by echoing the payload. Now, the implementation of the 'echoing' part was such that the payload length in the request is read, memory is allocated for a response of that size, and that many bytes are copied back from the request. This is all fine under normal circumstances. The issue is: what if the sender sends a message as shown below?

Payload length field: 10K bytes
Actual payload sent: 10 bytes

As per the implementation logic, 10K bytes get allocated and that many bytes of 'payload' are copied into the response. But what was received was only 10 bytes. Where will the remaining bytes come from? The copy simply reads past the received data, so whatever happens to be lying in the adjacent memory is returned along with the payload. It is likely that this portion of memory contains sensitive information, possibly even the private key, the one item in PKI that must never be compromised. Since the length field is 2 bytes, up to 64K of memory can be exposed per request.
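
To make the pattern concrete, here is a small, self-contained toy in C that mimics the flaw. The names, the memory layout and the 'secret' are invented for illustration; this is not the actual OpenSSL code.
#include <stdio.h>
#include <string.h>

/* Everything the "server" holds, packed into one block so that the
 * over-read below stays inside memory we own. */
struct server_memory {
    char request[32];   /* the incoming heartbeat payload lands here      */
    char secret[32];    /* sensitive data that happens to live next to it */
};

/* Vulnerable echo: trusts claimed_len instead of checking it against
 * received_len, just like the missing bounds check in Heartbleed. */
static void heartbeat_echo(const struct server_memory *mem,
                           size_t claimed_len, size_t received_len,
                           char *response)
{
    (void)received_len;                       /* the check that never happens */
    memcpy(response, mem->request, claimed_len);  /* reads past request[]     */
}

int main(void)
{
    struct server_memory mem;
    char response[sizeof(mem)] = {0};

    memset(&mem, 0, sizeof(mem));
    strcpy(mem.request, "hello");             /* only 5 bytes actually sent   */
    strcpy(mem.secret,  "-----PRIVATE KEY-----");

    heartbeat_echo(&mem, 48, 5, response);    /* attacker claims 48 bytes     */

    fwrite(response, 1, 48, stdout);          /* the echo leaks the secret    */
    putchar('\n');
    return 0;
}
Running it prints the five request bytes followed by whatever happens to sit next to them in memory, here the beginning of the fake private key.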

The failure to check the claimed message length against what was actually received, a missing bounds check, is obviously a serious concern. One could exploit this to extract sensitive information by working the mechanism intelligently. What's more, once the defect was known, there were challenges to use the Heartbleed defect to extract the private key of a server, and successful attempts were made to fetch a server's key.

How do we see Heartbleed in Action?

OpenSSL versions from 1.0.1 through 1.0.1f carry the defect. Versions from 1.0.1g carry the fix.

So, let's run OpenSSL's s_server and s_client utilities and see the bug in action. We will modify the code to have the client send an invalid heartbeat request message: we will encode the length bytes with a high value but send fewer bytes in reality. The defective server should return more than what is needed. We can create a memory dump of the server on the client side!

Download version 1.0.1b. Compile the source code and, without any changes, run the server and the client.

./openssl s_server -accept 5000 -debug

 ./openssl s_client -connect 127.0.0.1:5000 -debug

When the client is run, a successful TLS connection is established. In the client session, type 'B'. This results in the client sending out a heartbeat request to the server, and we can see the server replying with a heartbeat response. Essentially, the request is echoed: the same number of bytes as in the request can be seen in the response. (Note that the request and response messages are encrypted.)

[Screenshot: s_client/s_server debug output showing the heartbeat request and the echoed response]

Now, let's tweak the OpenSSL code a bit. We will put an incorrect value in the length field of the heartbeat message.

The code change is shown below:

[Screenshot: the code change that writes an inflated value into the heartbeat length field]

After the code change, let’s compile and repeat the test.

[Screenshot: s_client/s_server debug output after the code change]

Do we notice a difference in the response from the server compared with the previous attempt? Yes. The server is responding with 1072 bytes! And all these bytes are an actual dump of the contents of its working memory. This is the Heartbleed.

If we were to use OpenSSL version 1.0.1g, we would not see such behaviour from the server. We can see the fix here.
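
In terms of the toy handler sketched earlier, the essence of the fix is a single bounds check before the copy; this paraphrases the idea rather than quoting the actual patch:
    /* Silently discard the request if the claimed length cannot be right. */
    if (claimed_len > received_len)
        return;
    memcpy(response, mem->request, claimed_len);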

In hindsight it does appear to be a trivial miss of an elementary principle of defensive coding. It happens! But it had the potential to cause immense damage.