Objective 11.2: Given an architectural system specification, identify appropriate locations for implementation of specified security features, and select suitable technologies for implementation of those features
- java.security and javax.crypto packages – java.security contains the standard algorithm APIs; the encryption classes live in javax.crypto (the JCE), which historically shipped separately because of US export laws.
- New in Java 1.1
- Symmetric encryption – faster than asymmetric; a single key both encrypts and decrypts, but the same key must somehow be passed to both parties. Example: DES.
- Asymmetric encryption – public/private key pairs; no need to exchange a secret key, so safer than symmetric but slower. Encrypt with the receiver’s public key. Example: RSA.
- Session Encryption – PGP, a hybrid of asymmetric and symmetric. It has the convenience of asymmetric and the speed of symmetric. PGP creates a session key, which is a one-time-only secret key. This session key works with a very secure, fast conventional encryption algorithm to encrypt the plaintext; the result is ciphertext. Once the data is encrypted, the session key is then encrypted with the recipient's public key. This public-key-encrypted session key is transmitted along with the ciphertext to the recipient. The recipient's copy of PGP uses his or her private key to recover the temporary session key, which PGP then uses to decrypt the conventionally encrypted ciphertext.
- Message Digest – a one-way function that produces a hash value of a message, used as a ‘fingerprint’ to check data integrity. Example message digest functions: MD4, MD5.
- Digital Signature – a message digest encrypted with the sender’s private key, sent with the message to prove it hasn’t been changed. Note that a signature is distinct from a MAC (Message Authentication Code), which is computed with a shared secret key rather than a private key.
- Digital Certificates – a digital certificate is a message, signed by a certification authority (CA), that certifies the value of a person or organisation’s public key. It contains the name of the entity for whom the certificate was issued (the ‘subject’), the subject’s public key, and the digital signature of the CA used to verify the certificate.
- Encryption algorithms are based on mathematically difficult problems - for example, prime number factorisation, discrete logarithms. Elliptic curves can provide versions of public-key methods that, in some cases, are faster and use smaller keys, while providing an equivalent level of security. Their advantage comes from using a different kind of mathematical group for public-key arithmetic
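The hybrid (session-key) scheme described above can be sketched with standard JCE classes: a one-time AES key encrypts the bulk data, and only that small key is wrapped with the receiver's RSA public key. Class and variable names here are illustrative, not from any particular product.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class HybridDemo {
    public static void main(String[] args) throws Exception {
        String plaintext = "meet at noon";

        // Receiver's long-term RSA key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair receiver = kpg.generateKeyPair();

        // Sender: a one-time AES session key encrypts the bulk data...
        SecretKey session = KeyGenerator.getInstance("AES").generateKey();
        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] iv = aes.getIV();
        byte[] ciphertext = aes.doFinal(plaintext.getBytes("UTF-8"));

        // ...and the session key itself is wrapped with the receiver's public key
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, receiver.getPublic());
        byte[] wrappedKey = rsa.wrap(session);

        // Receiver: unwrap the session key with the private key, then decrypt
        rsa.init(Cipher.UNWRAP_MODE, receiver.getPrivate());
        Key recovered = rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        aes.init(Cipher.DECRYPT_MODE, (SecretKey) recovered, new IvParameterSpec(iv));
        System.out.println(new String(aes.doFinal(ciphertext), "UTF-8"));
    }
}
```

Only `wrappedKey`, `iv` and `ciphertext` would travel over the wire; the slow RSA operation is applied to 16 bytes of key material rather than the whole message.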
Asymmetric Steps
- i. Sender calculates the message digest using a message digest algorithm.
- ii. Sender encrypts the message using the public key of the receiver.
- iii. Sender encrypts the digest using the sender’s private key.
- iv. Sender sends the message and digest to the receiver.
- v. Receiver decrypts the message using the receiver’s own private key.
- vi. Receiver calculates the message digest using the same message digest algorithm as before (i).
- vii. Receiver decrypts signature to get message digest using sender’s public key.
- viii. Receiver checks that the two digests are equal.
- ix. If the digests are equal, the message is authentic and can be used.
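The digest-and-sign steps above are wrapped up by the JCA `Signature` class, which hashes and encrypts/decrypts the digest in one call. A minimal sketch (class name `SignDemo` is illustrative):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] msg = "order #42: ship 10 units".getBytes("UTF-8");

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Sender: hash the message and encrypt the digest with the private key (steps i and iii)
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(msg);
        byte[] sig = signer.sign();

        // Receiver: recompute the digest and check it against the signature (steps vi-viii)
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(msg);
        System.out.println(verifier.verify(sig)); // true

        // A tampered message fails verification
        verifier.initVerify(pair.getPublic());
        verifier.update("order #42: ship 99 units".getBytes("UTF-8"));
        System.out.println(verifier.verify(sig)); // false
    }
}
```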
Benefits
- i. Unforgeability – only the signer has the private key.
- ii. Verifiability – public key available so anyone can verify message.
- iii. Single-use – signature unique to each message.
- iv. Non-repudiation – only the signer holds the private key, so only the signer could have produced the signature, proving the message came from them.
- v. Sealing – digitally sealed, message cannot be altered without invalidation.
Secure Sockets Layer (SSL) – HTTPS uses port 443. SSL sits on top of the TCP/IP layer and below the application layer (eg: HTTP, LDAP, IMAP). It can use a variety of encryption algorithms and can detect modified or inserted data in messages.
SSL Server Authentication – confirm the identity of the server using standard public-key cryptography to check that the server’s certificate and public ID are valid and have been issued by a CA that the client trusts.
SSL Client Authentication – the same as server but to check the identity of the client (optional).
Encrypted SSL Connection – all information between parties is encrypted to provide confidentiality and tamper detection.
Byte code verifier to check language safety constraints; access controller to check methods on the stack against permissions (it has a ‘doPrivileged’ method to allow code to do things outside the sandbox if they are allowed via permissions); Security Manager; Class Loader; and the java.security package.
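A minimal sketch of the ‘doPrivileged’ mechanism mentioned above: trusted library code asserts its own privileges for a sensitive action regardless of who called it. Note that `AccessController` is deprecated in recent JDKs (the Security Manager is being phased out), but it still illustrates the Java 2 model.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivDemo {
    public static void main(String[] args) {
        // A trusted method can perform a sensitive action on its own authority,
        // even when invoked from less-trusted code further down the stack,
        // by wrapping the action in doPrivileged.
        String javaHome = AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("java.home"));
        System.out.println(javaHome != null && !javaHome.isEmpty());
    }
}
```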
The SSL protocol supports the use of a variety of different cryptographic algorithms, or ciphers, for operations such as authenticating the server and client to each other, transmitting certificates, and establishing session keys. Clients and servers may support different cipher suites, or sets of ciphers, depending on factors such as the version of SSL they support, company policies regarding acceptable encryption strength, and government restrictions on export of SSL-enabled software. Among its other functions, the SSL handshake protocol determines how the server and client negotiate which cipher suites they will use to authenticate each other, to transmit certificates, and to establish session keys.
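The cipher suites a given JVM can negotiate can be inspected through JSSE. A small sketch (class name `Suites` is illustrative):

```java
import javax.net.ssl.SSLContext;

public class Suites {
    public static void main(String[] args) throws Exception {
        // The default SSLContext reflects the JVM's installed providers
        SSLContext ctx = SSLContext.getDefault();
        String[] suites = ctx.getSupportedSSLParameters().getCipherSuites();

        // Each entry names a full suite: key exchange, bulk cipher and MAC/digest
        System.out.println(suites.length > 0);
    }
}
```

During the handshake, client and server each offer their list and settle on the strongest suite they have in common.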
The Diffie-Hellman key agreement protocol (also called exponential key agreement) was developed by Diffie and Hellman [DH76] in 1976 and published in the ground-breaking paper "New Directions in Cryptography." The protocol allows two users to exchange a secret key over an insecure medium without any prior secrets. The Diffie-Hellman key exchange is vulnerable to a middleperson attack. This vulnerability is due to the fact that Diffie-Hellman key exchange does not authenticate the participants. Possible solutions include the use of digital signatures and other protocol variants.
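Diffie-Hellman key agreement is available directly in the JCE. A sketch of the two-party exchange (names `alice`/`bob` are illustrative); note that Bob must reuse Alice's public parameters (p, g), and that nothing here authenticates either party, which is exactly the man-in-the-middle weakness noted above:

```java
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class DHDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates DH parameters and her key pair
        KeyPairGenerator aliceGen = KeyPairGenerator.getInstance("DH");
        aliceGen.initialize(2048);
        KeyPair alice = aliceGen.generateKeyPair();

        // Bob reuses Alice's public parameters (p, g) for his own pair
        KeyPairGenerator bobGen = KeyPairGenerator.getInstance("DH");
        bobGen.initialize(((DHPublicKey) alice.getPublic()).getParams());
        KeyPair bob = bobGen.generateKeyPair();

        // Each side combines its own private key with the other's public key
        KeyAgreement aliceKa = KeyAgreement.getInstance("DH");
        aliceKa.init(alice.getPrivate());
        aliceKa.doPhase(bob.getPublic(), true);

        KeyAgreement bobKa = KeyAgreement.getInstance("DH");
        bobKa.init(bob.getPrivate());
        bobKa.doPhase(alice.getPublic(), true);

        // Both arrive at the same shared secret without ever transmitting it
        System.out.println(Arrays.equals(aliceKa.generateSecret(),
                                         bobKa.generateSecret()));
    }
}
```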
- java.security.PrivilegedAction – interface for an action to be performed with privileges enabled.
- java.security.PrivilegedExceptionAction – like PrivilegedAction, but for actions that can throw a checked exception.
- java.security.GuardedObject – used to wrap an object you intend to guard.
- java.security.CodeSource – location (URL) and certificate(s) used to verify signed code from the location.
- java.security.KeyPairGenerator – Factory for creating private and public keys.
Database used by the ‘keytool’ utility; stores private and public keys. Each entry has: an alias, one or more certificates, and an optional private key protected by a password.
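The same keystore database that keytool manages can be read and written programmatically through `java.security.KeyStore`. A sketch that builds an in-memory store (alias and password are illustrative):

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyStore;

public class KeyStoreDemo {
    public static void main(String[] args) throws Exception {
        char[] pw = "changeit".toCharArray();

        // load(null, ...) creates a new, empty in-memory keystore;
        // passing a FileInputStream instead would open an existing one
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, pw);

        // Store a secret key under an alias, protected by a password
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        ks.setEntry("my-aes-key",
                new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(pw));

        System.out.println(ks.containsAlias("my-aes-key"));
    }
}
```

Calling `ks.store(outputStream, pw)` would persist it to a file that keytool can then list.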
Authentication – based on Pluggable Authentication Modules (PAMs) with a framework for both client and server; uses a LoginContext class that has life-cycle methods called as part of authentication (two-phase commit and chain of responsibility patterns). Authorisation – based on the notion of classes being members of protected domains, threads of execution each having an Access Control Context object, and an authorisation policy that is enforced by an AccessController.
New for Java 1.2. In Java 1.0 all remote code was un-trusted and ran in a sandbox with no way to get out. In Java 1.1 signed remote code was trusted and could do anything, while unsigned code was confined to the sandbox again. In Java 1.2 the introduction of Policies allowed code to be restricted to a finer degree – the sandbox could be widened selectively. Java 1.2 also introduces interfaces for certificates, in particular an X.509 v3 certificate implementation. Sun provides some encryption algorithms, but you can get others from other providers.
Defines a set of APIs for using SSL and related technologies such as TLS. A reference implementation is included in the packages.
Zone of computers between the private network and the outside public network; prevents direct access to the private network and typically hosts the company’s public web servers.
The basic security functions of any firewall are to examine data packets sent through the firewall, and to accept, reject or modify the packets according to the security policy requirements. Also they hide the network topology. There are different types:
- Packet Filters – all traffic from un-trusted to trusted areas are passed through the filter. It inspects the packets and rejects or allows them based on rules. The rules typically deal with TCP/IP – TCP contains headers with source and destination ports, and IP contains headers with source and destination addresses. So they can port and address filter.
- Stateful Packet Filters (SPFs) – similar to the above, but also track the state of TCP connections, so attackers cannot send packets that fraudulently appear to be part of an existing connection. This is harder to do for UDP, which is connectionless.
- Proxies – break the connection between client and server: to the server the proxy appears to be the client, and to the client it appears to be the server. A proxy can check the legality of the data (eg: a GET command) before passing it on, and can also improve bandwidth and response times by caching data.
So filters tend to look at protocol-level information, whereas proxies can look deeper at the contents of the data flow. Filters are less memory- and CPU-intensive and less complex (easier to manage, configure, and get audit info from). All three firewall strategies can be used together – known as ‘defence in depth’.
Constructed on top of a public network but use encryption and other security mechanisms to ensure authorisation and data integrity; handy and cheap because they use the internet or other existing public networks. VPNs have tended to require proprietary software/hardware, but the IP Security (IPSec) specification is part of IPv6. Typically data is encrypted as it is sent to the public network and decrypted when it is removed (by software that is part of a company’s firewall). The applications that use the VPN are unaware of the encryption going on.
Victim hosts are hosts that face a security threat from outsiders because their IP addresses are exposed. To prevent this, install a firewall or packet-filtering router to restrict access from outside (eg: close the telnet port, disable broadcast forwarding).
Smurfing – send a PING to an Internet broadcast address with the victim’s address spoofed as the source, so the replies flood the victim with useless traffic.
Audit logs record who has been allowed into your system; it is vital to keep these logs secure.
Round robin DNS: a URL maps to a list of IP addresses, and each is chosen in turn. The life of cached entries (Time To Live – TTL) must be set low enough to get good balancing, but not so low that excessive DNS lookups slow things down. However, round robin doesn’t cope well if one machine dies – it will still route some of the traffic to the dead machine – and there is no support for server affinity. Because DNS RR doesn’t take a metric such as CPU usage into account, it is considered a load-sharing rather than a load-balancing solution. Alternatively, use a reverse proxy that the DNS points at: it is a single point, so it is easy to configure and to collect logs from, gives complete control (and therefore server affinity), and copes with crashes – but it is more complex and is a single point of failure.
Server Affinity or Sticky Load Balancing – route requests from the same client to the same back-end machine. This is necessary if you use Stateful Session or Entity EJBs, where there can be only one valid server to pass subsequent requests to. Load distribution with server affinity recognises that multiple servers are acceptable targets for requests, but also that some requests are best directed to a particular server. Weak affinity – attempts to enforce the affinity, but it is not always guaranteed. Strong affinity – the affinity is guaranteed to be respected.
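The combination of round-robin distribution with sticky routing can be sketched in a few lines of plain Java. This is only an illustration of the idea (class and server names are invented), not how a real load balancer is implemented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StickyBalancer {
    private final List<String> servers;
    private final Map<String, String> affinity = new HashMap<>();
    private int next = 0;

    StickyBalancer(List<String> servers) { this.servers = servers; }

    // First request from a session is distributed round-robin;
    // later requests stick to the server already chosen for that session.
    String route(String sessionId) {
        return affinity.computeIfAbsent(sessionId,
                id -> servers.get(next++ % servers.size()));
    }

    public static void main(String[] args) {
        StickyBalancer lb = new StickyBalancer(List.of("app1", "app2"));
        System.out.println(lb.route("sess-A")); // app1
        System.out.println(lb.route("sess-B")); // app2
        System.out.println(lb.route("sess-A")); // app1 again: affinity respected
    }
}
```

A real implementation would also need a fallback when the pinned server dies – which is exactly why weak affinity exists.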