Tuesday, June 21, 2011

How to enable Service Virtualization across hosts

One of the core patterns for a Gateway is "Service Virtualization". Service Virtualization means that an organization can expose virtual services in front of its infrastructure. These virtual services can take the form of lightweight REST APIs or heavyweight SOAP Web Services. The Service Virtualization pattern enables you to do neat things, like expose a REST service in front of a SOAP service, and convert REST to SOAP dynamically at the Gateway. You can also use the Gateway to deploy a virtual service in front of a database, or a message queue, or an ESB.

But how does it work? The answer comes down to how the virtual service is advertised to the client. Remember that service interfaces are generally advertised using WSDL (and as of WSDL 2.0, this applies to REST API interfaces as well as SOAP). WSDL includes the address of the service provider host. When the Gateway exposes a virtual service, it must replace this address with the address of the Gateway. Otherwise, clients would simply connect directly to the back-end service, bypassing the Gateway.
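The address rewrite at the heart of this can be sketched in a few lines. The namespace below is the real WSDL 1.1 SOAP binding namespace, but the sample service and gateway URL are invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"

def rewrite_wsdl_address(wsdl_xml, gateway_url):
    """Replace every soap:address location in the WSDL with the
    Gateway's own address, so clients connect to the Gateway."""
    root = ET.fromstring(wsdl_xml)
    for addr in root.iter("{%s}address" % SOAP_NS):
        addr.set("location", gateway_url)
    return ET.tostring(root, encoding="unicode")

# A trimmed-down WSDL fragment with a back-end address in it
wsdl = (
    '<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" '
    'xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">'
    '<service name="StockQuote"><port name="StockQuotePort">'
    '<soap:address location="http://backend.internal:8080/StockQuote"/>'
    '</port></service></definitions>'
)
rewritten = rewrite_wsdl_address(wsdl, "https://vordelgateway/StockQuote")
```

After the rewrite, the only address a client ever sees in the WSDL is the Gateway's.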

Here we see an example where the client is pulling down the WSDL of a virtual service from the Vordel Gateway. Notice that the address of the service has been changed to the address of the Gateway:

But what if a client from the outside world accesses the virtual service, via a public Fully-Qualified Domain Name like services.mycompany.com? Will the WSDL still say "VordelGateway" in it? If so, this would not work.

A neat feature of the Vordel Gateway is that it dynamically virtualizes its services based on how the client calls it. So, when we call the virtual service using the hostname services.mycompany.com, this is what happens:

Notice that the Vordel Gateway has dynamically virtualized the service with the hostname used by the client. If we'd pulled down the WSDL using the Gateway's IP address, it would have placed that IP address in there instead. This is a very neat feature.
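A minimal sketch of that behaviour: build the advertised address from whatever Host value the client used to reach the Gateway (the function name and parameters here are my own, not the Gateway's):

```python
def advertised_address(host_header, path, use_tls=True):
    """Build the service address to put in the WSDL, using whatever
    hostname (or IP address) the client reached the Gateway with."""
    scheme = "https" if use_tls else "http"
    return "%s://%s%s" % (scheme, host_header, path)

# A client calling via the public FQDN sees that FQDN in the WSDL:
public = advertised_address("services.mycompany.com", "/StockQuote")
# A client calling via an IP address sees the IP address instead:
by_ip = advertised_address("192.0.2.10", "/StockQuote")
```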

The SSL-savvy among you may be thinking "hmm.... those WSDL addresses use SSL, but that's going to throw a warning if the hostname changes, and it'll also cause some Java clients not to connect". Well, that points to another neat feature that enables Service Virtualization. The Vordel Gateway implements SSL Server Name Indication (SNI), which means that when it's called using a particular hostname, it will dynamically use the appropriate SSL certificate (and private key) for that connection. If you right-click on an SSL interface in Policy Studio, you can see this:

Notice in the screenshot above that there are two certificates set. Both must have corresponding private keys, since the server must prove possession of the private key during the SSL handshake. When the Gateway is called using the name "vordelgateway", it assumes the identity "CN=VordelGateway" (CN means "Common Name", in X.509 certificate jargon). When the Gateway is called using "services.mycompany.com", as in the second screenshot above, it assumes the identity "CN=services.mycompany.com". This is all done on the fly. Without this feature, many clients would refuse to connect because the SSL certificate would not match the hostname. But with this feature, it "just works".
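Outside the Gateway, the same certificate-per-hostname selection can be sketched with Python's standard ssl module, which exposes the SNI hostname via a callback. The hostnames and certificate file names below are illustrative:

```python
import ssl

# One SSLContext per advertised hostname, each of which would be
# loaded with its own certificate and private key (file names are
# placeholders, so the load_cert_chain calls are left commented out):
contexts = {
    "vordelgateway": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "services.mycompany.com": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}
# contexts["vordelgateway"].load_cert_chain("vordelgateway.pem")
# contexts["services.mycompany.com"].load_cert_chain("services.pem")

def select_context(ssl_socket, server_name, initial_context):
    """Called during the TLS handshake with the SNI hostname the
    client asked for; swap in the matching certificate's context."""
    ctx = contexts.get(server_name)
    if ctx is not None:
        ssl_socket.context = ctx

server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_context.sni_callback = select_context
```

With this in place, each incoming connection is answered with the certificate matching the name the client used.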

For more info, you can register for a live demo of the Vordel Gateway at: http://www.vordel.com/demo.html

Monday, June 6, 2011

The value of an Audit Trail for blocked REST API calls

[ Update: Axway acquired Vordel in 2012 and the new name for the Vordel Gateway is the Axway API Gateway ]

An often-overlooked aspect of security is the Audit Trail. In the case of a REST API, we want to know not only that a REST API call was blocked, but why it was blocked.

Let's take a look at the Real-Time Monitoring from the Vordel Gateway, deployed to manage a REST API. We see the orange spike indicating that an API call was blocked:

The key to looking up the Audit Trail is the message ID. Vordel users will be familiar with this ID as the ${id} attribute, which is automatically created for each message in the Gateway. In this case, I highlight the message ID for the offending message and copy it:

Then I tab over to the Audit Trail and paste the Message ID into the search form:

When I press the Search button, I can see the message content, including the SQL Injection attempt which I have circled. The Vordel Gateway detected and blocked this attack against the REST API.
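The mechanics above can be sketched in miniature: assign each message an ID (analogous to the Gateway's ${id} attribute), and when a request is blocked, record the ID, the reason, and the offending content so it can be looked up later. This is a toy illustration, not the Gateway's implementation:

```python
import re
import uuid

audit_log = []   # stand-in for the Gateway's audit store

# A deliberately crude SQL injection pattern, for illustration only
SQLI = re.compile(r"('|--|;|\b(union|select|drop)\b)", re.IGNORECASE)

def screen_request(query_string):
    """Give the message an ID; if it looks like SQL injection,
    record the ID, the reason, and the content, then block it."""
    msg_id = str(uuid.uuid4())
    if SQLI.search(query_string):
        audit_log.append({"id": msg_id,
                          "reason": "SQL injection detected",
                          "content": query_string})
        return msg_id, False   # blocked
    return msg_id, True        # allowed through
```

Searching `audit_log` by the message ID then recovers both the reason for the block and the original request content.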

So, it's important to know not only that a REST API call was blocked, but why it was blocked and what the REST API call actually was. This is the value of an audit trail. In addition, the Audit Trail logs may be signed, and the key used to sign them may be stored on an HSM (Hardware Security Module). All of the screenshots were taken from the Vordel evaluation image, which you can request from info@vordel.com

Sunday, June 5, 2011

The REST Doggy Door

It's often a good thought experiment to read something with a "developer" hat on, and then read the same thing with a "security" hat on. There is a classic example of this in a comment today on the Service Oriented blog:
REST is so simple to implement, that its like a doggie door... something that will let anything in, when you want to provide open interfaces. When you don't know what you're going to be hooking up, REST is good!
A REST API is indeed a great way to allow a multitude of devices and apps to consume a service. Almost any app can create a simple HTTP GET and pass some parameters with it because, as people say, "it's just a wget".
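To make the point concrete, here is the entirety of what a client needs to build (the endpoint is hypothetical):

```python
import urllib.parse

# Everything a REST client needs: a URL and some query parameters
params = urllib.parse.urlencode({"city": "dublin", "units": "metric"})
url = "http://api.example.com/weather?" + params
# urllib.request.urlopen(url) would then issue the GET -- no SOAP
# envelope, no generated stubs, no toolkit required
```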

But wait, what about security? In the doggy door analogy, what if a snake or, if you're in Florida, a lizard comes in through that doggy door? In the REST world, the snakes and lizards are malicious users, who want to use REST services for data-mining or denial of service. For this reason it is important that REST APIs be protected and managed. However, back to the doggy door analogy, it is just as important not to make that doggy door so complex that the dog gives up and goes elsewhere. In that case, you'd be locking the dog out with the snakes and lizards.

For all these reasons, REST API management has to enable the right apps and devices to connect, without placing onerous requirements on them. Remember that REST exists to be easy to use, so if you force clients to suddenly place honking great security assertions into each request, they will be turned off.

So what is the solution? I've written about the options for securing REST APIs before, and I recommend checking out this 40-minute video explaining how REST APIs can be deployed safely. The key is to choose an authentication scheme which can be supported by the widest variety of clients, leveraging open standards and best practices. Wind the video on to minute 20 to see how the "REST doggy door" can be secured.
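As one illustration of a low-friction scheme (a sketch, not a description of any particular product feature): an HMAC signature over the request body, which nearly any client stack can compute with a few lines of code. The key store and names here are invented:

```python
import hashlib
import hmac

API_KEYS = {"mobile-app": "s3cret-key"}   # invented key store

def verify_request(key_id, signature, body):
    """Check an HMAC-SHA256 signature over the request body, keyed
    by the shared secret registered for this client."""
    secret = API_KEYS.get(key_id)
    if secret is None:
        return False   # unknown client
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature byte by byte
    return hmac.compare_digest(expected, signature)
```

The dog only has to compute one hash per request; the snakes, lacking the key, stay outside.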

Saturday, June 4, 2011

Projecting Identity to the Cloud - Cloud Expo New York

I'm on the other side of the world, working with partners and customers this week on some pretty exciting stuff, so my colleague Isabelle Mauny will be ably giving the presentation on Single Sign-On to Cloud Services over in New York at the Cloud Expo.

So what is the reason for Single Sign-On to Cloud services? It's all part of "Bring Your Own Identity" (BYOI). BYOI is a major trend for Cloud services. Witness the many "Log in with Facebook" and "Log in with Google" buttons on sites like TripIt. In the enterprise, it's about "Identity Projection" where users log in as usual (e.g. with Active Directory, or to a corporate portal) and then are seamlessly logged into Cloud-based services such as a corporate Google Mail account. This means projecting your corporate identity up to the Cloud service. It's "Bring Your Own Corporate Identity". And Single Sign-On is what enables this.

The most obvious benefit of this is that it saves the user the hassle of keying in another password. That is a good benefit, but there are a lot more:

- As Nik Cubrilovic put it in his detailed treatise on the "The Anatomy Of The Twitter Attack", "Bad human habit #1: Using the same passwords everywhere. We are all guilty of it." If you ask users to log in to multiple services in order to get their work done, they will most likely use the same password everywhere. This provides an attacker with a "find once, use anywhere" approach to passwords. But if Single Sign-On is used, no password is ever sent up to the Cloud service. This is all part of the trend to minimize password use, for good reasons.

- It is costly to manage all those passwords. Over the years, password resets have repeatedly been shown to cost organizations real money. They waste productivity (users can't get to the information they need for their work) and tie up IT helpdesk staff. As mentioned in the point above, in the Cloud world all those password resets create a security threat.

- Agility. The word is over-used, but in the case of Single Sign-on to the Cloud, it means that new Cloud-based services can be brought on-stream for employees (TripIt for travel management is a good example), without having to provision all those employees with new passwords.
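SAML is the standard that typically carries this "projected" identity. As a toy illustration of the underlying idea in the first point above (a signed, short-lived assertion crosses the wire instead of a password), with an invented signing key and field names:

```python
import base64
import hashlib
import hmac
import json
import time

IDP_SECRET = b"corp-idp-signing-key"   # invented signing key

def issue_assertion(user, audience):
    """The corporate identity provider signs a short-lived assertion;
    the user's password never leaves the enterprise."""
    payload = json.dumps({"sub": user, "aud": audience,
                          "exp": int(time.time()) + 300}).encode()
    sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_assertion(token):
    """The cloud service checks the signature and the expiry time
    instead of ever seeing a password."""
    b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(b64)
    good = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good, sig):
        return None   # tampered or forged
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

Real SAML uses XML signatures and public-key cryptography rather than a shared secret, but the shape of the exchange is the same: sign on one side, verify on the other, no password in between.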

I think that this "Projection of identity to the Cloud" is going to be an important topic going forward. The session is at 3.15pm on Wednesday June 8th at the Javits Center.