Wednesday, September 30, 2009

The Cloud goes Postal

The functionality of the US Post Office is now available on a Cloud-based delivery model, courtesy of NetSuite. Check out the story by Maureen O'Gara at sys-con.

The USPS services seem to be deployed as a SaaS model, accessed through a browser. As yet there do not seem to be managed PaaS APIs available (or are there?) which can be pulled into other applications, for example iPhone apps. Of course, right now it's easy to co-opt the Web interface to USPS.com in order to programmatically query Post Office shipping times over HTTP. Though I can just imagine REST zealots arguing that it should really be termed the "GET Office" in that case [really lame joke, I know :-) ].
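
For what it's worth, here is a minimal sketch of that kind of programmatic query in Java. The URL and query parameters below are purely hypothetical placeholders, not a real USPS endpoint:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ShippingTimeQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and parameters, for illustration only
        URL url = new URL("https://www.example.com/shipping-times?origin=02109&destination=20001");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Read and print the response body
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
        conn.disconnect();
    }
}
```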

Tuesday, September 29, 2009

Vote in the 2009 SOA World Magazine "Readers' Choice Awards"

Check out the 2009 SOA World Magazine "Readers' Choice Awards"

Vordel is nominated in the "Best Integration Tool" and "Best Security Solution" categories for our XML Gateway and the "Best SOA Testing Tool" category (for SOAPbox). This blog is nominated in the site category (under its old XML Networking moniker).

Get out and vote!

Thursday, September 24, 2009

On the agenda at VordelWorld - "When is application architecture cloudy?"

The abstract for the VordelWorld talk by Richard Watson of the Burton Group is now up on the VordelWorld Agenda. VordelWorld takes place in Dublin, Ireland, from 4th to 6th November and focuses on governance for SOA and the Cloud.

-------------

"Cloud Application Architecture: Rebuilding applications for the cloud"
Speaker: Richard Watson, Burton Group

The Cloud promises to bring infinite scalability, unlimited availability, and increased responsiveness. Can applications realize cloud benefits through a simple off-premise server migration? Does Cloud require developers to re-write applications or port applications to proprietary Platform as a Service (PaaS) environments?

This session will detail cloud application architecture patterns, cloud application frameworks, portability and migration strategies, and deployment topology considerations. It will answer the following questions:
  • When is an application platform cloudy?
  • When is application architecture cloudy?
  • When is cloud application architecture an appropriate choice?
  • What architecture roadmap should be chosen to make applications cloud ready?

Wednesday, September 23, 2009

The Multi-Domain Registry/Repository

Frank Kenney from Gartner coined the term "Multi-Domain Registry/Repository", or MDRR, in a tweet recently.

What is an MDRR and why is it important? To understand, think of the registry/repository traditionally seen as part of a SOA architecture. It is supposed to include the addresses of the services available in the SOA, plus metadata about the services, such as their policies.

Now think about how organizations are starting to rely on Cloud-based services, such as Amazon S3 (storage) and Force.com (sales force automation). These services are not on-premises SOA services, so they are not in the SOA registry/repository. But the organization relies on these services! This means that the registry/repository does not contain the full complement of services used by the organization.

But wait, you may say, a SOA registry/repository is intended to manage and control services, and you can't control services provided by Amazon or Force.com because they are simply not under your control. This is true, but the goal is not to control the services. The goal is to monitor the performance, availability, and compliance of these external services. Virtual Services are the key to this. Rather than managing the third-party services, you are managing virtual interfaces to those services. The virtual service is provided by a Cloud Gateway which acts as a broker to the services.

These virtual services are included in a Multi-Domain Registry/Repository alongside internal on-premises services (that's why it's "multi-domain"). From an organization's standpoint, all of the services they depend on are in one place. That's valuable.

Governance services provided by the Cloud Gateway, such as services to scrub data of private information, are also housed in the MDRR. These governance services are another example of on-premises services. This means that the full complement of services is present. They are shown in the diagram below:


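To make the "multi-domain" idea more concrete, here is a rough sketch, with entirely hypothetical names, of what a single entry in such a registry might record: the service's address, the domain it lives in (on-premises or cloud/virtual), and some policy metadata:

```java
import java.net.URI;
import java.util.List;

/** Hypothetical sketch of an entry in a Multi-Domain Registry/Repository. */
public class RegistryEntry {

    /** The domain a registered service belongs to. */
    public enum Domain { ON_PREMISES, CLOUD_VIRTUAL }

    private final String name;           // e.g. "OrderService", or a virtual service fronting Amazon S3
    private final URI endpoint;          // where consumers (or the Cloud Gateway) reach the service
    private final Domain domain;         // which domain the service lives in
    private final List<String> policies; // metadata, e.g. "scrub-private-data", "monitor-availability"

    public RegistryEntry(String name, URI endpoint, Domain domain, List<String> policies) {
        this.name = name;
        this.endpoint = endpoint;
        this.domain = domain;
        this.policies = policies;
    }

    public String getName() { return name; }
    public URI getEndpoint() { return endpoint; }
    public Domain getDomain() { return domain; }
    public List<String> getPolicies() { return policies; }
}
```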

Look for the MDRR meme to grow in the months ahead....

And I can't resist the cheesy Hart-to-Hart reference:

On-premises services and
virtual services

When they met,
it was MDRR

Tuesday, September 22, 2009

"...cloud computing is not only the future of computing, it is the present and the entire past".

Check out this great account by Jon Fortt of Fortune Magazine's Big Tech blog on Larry Ellison's appearance at the Churchill Club yesterday evening. The standout quote for me was this one about Cloud Computing:
"Cloud? Clouds are water vapor. My objection to cloud computing is the fact that cloud computing is not only the future of computing, it is the present and the entire past. Google's (GOOG) now cloud computing. Everybody's cloud computing. … All it is, is a computer attached to a network. What are you talking about? What do you think Google runs on? It's databases and operating systems and memory and processors! What are you talking about?"
http://brainstormtech.blogs.fortune.cnn.com/2009/09/22/oracle-ceo-sees-long-slog-for-u-s-economy/

Monday, September 21, 2009

Beyond the Amazon Virtual Private Cloud

Amazon's Virtual Private Cloud allows Amazon EC2 instances to exist within a VPN environment, managed by an organization's existing network security infrastructure. As Steve Riley defines Amazon Virtual Private Cloud:
Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources
http://stvrly.wordpress.com/2009/09/08/what-can-you-do-with-amazon-virtual-private-cloud/
Before Amazon Virtual Private Cloud, Amazon EC2 instances outside the firewall were not under the control of internal network management systems. They were the equivalent of remote workers, cut loose from corporate infrastructure management:



With Amazon Virtual Private Cloud, the Amazon EC2 images can be assigned IP addresses from a range selected by the owner organization. Thus they can be brought into the control of that organization's network management systems, as diagrammed below:



This is definitely a step in the right direction. The clear next steps are:

1) Other Cloud services besides Amazon EC2.
What about the connections to Force.com, or Google Apps? Not to mention other Amazon Web Services services such as Amazon SQS.
2) Governance, including Identity and Access Management.
Network Management defines which computers can talk to other computers. But identity and access management defines which users can use which applications, and how they can use them. This is the realm of products such as Microsoft Active Directory, CA SiteMinder, and LDAP products such as Novell eDirectory. If an organization wishes to bring their identity management infrastructure to bear on their usage of Cloud services, how can they do this?

To illustrate this, look at the diagram below. We see that network management of the Amazon EC2 service is taken care of by Amazon Virtual Private Cloud. This is now controlled by an organization's on-premises network management infrastructure. But identity and access management of the Cloud services requires a link to the organization's existing on-premises identity and access management infrastructure. Also, while the Amazon EC2 service is within the Amazon Virtual Private Cloud, the other Cloud Services, which are accessed at the API level by API Keys and OAuth, are not:



This means that the organization's on-premises policy-based control is not being applied to all their Cloud-based services. Diving down to the technology, the conundrum is how to translate from the identity tokens used on the network (Kerberos for Windows networking, plus SAML for Web Services) up to the API keys and OAuth used at the Cloud level. This is what would allow existing on-premises identity management infrastructure to control access to Cloud services on a fine-grained level.



The solution is to use a Cloud Gateway. The Cloud Gateway bridges the connection from the on-premises identity management infrastructure up to the Cloud services. This allows users who access applications locally (or simply sign on to their PCs) to access Cloud Services, all the while governed under the umbrella of an identity management infrastructure. Rules applied to internal applications, governing who can access which applications and how they can use them, can now be applied in the same way to Cloud-based applications.
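
As a rough illustration of the kind of bridging step involved (all class and method names here are hypothetical, not a real gateway API): the gateway validates the caller's on-premises token, maps the resulting identity to the credential the Cloud service expects, and attaches that credential to the outgoing call.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of identity bridging inside a Cloud Gateway. */
public class IdentityBridge {

    // Hypothetical mapping from an on-premises user identity to a cloud API key
    private final Map<String, String> userToApiKey = new HashMap<String, String>();

    public IdentityBridge() {
        userToApiKey.put("alice@corp.example", "CLOUD-API-KEY-FOR-ALICE"); // placeholder values
    }

    /**
     * Validate the incoming on-premises token (e.g. a SAML assertion or Kerberos ticket),
     * then return the Cloud credential to attach to the outbound API call.
     */
    public String bridge(String onPremisesToken) {
        String userId = validateAndExtractUser(onPremisesToken);
        String apiKey = userToApiKey.get(userId);
        if (apiKey == null) {
            throw new SecurityException("No cloud credential mapped for user: " + userId);
        }
        return apiKey; // the caller adds this as an API key, or uses it to obtain an OAuth token
    }

    private String validateAndExtractUser(String token) {
        // Placeholder: real validation would verify a SAML signature or a Kerberos ticket
        return "alice@corp.example";
    }
}
```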



The Cloud Gateway allows on-premises Identity and Access Management to govern Cloud usage. This is analogous to how, at a network level, the Amazon Virtual Private Cloud allows on-premises Network Management to manage Cloud connections. Thus a Cloud Gateway complements and extends the Amazon Virtual Private Cloud. It allows single sign-on from on-premises applications up to Cloud-based applications, and allows an organization's identity and access management infrastructure to be brought to bear on that organization's usage of Cloud services.

Thursday, September 17, 2009

Google: First we take Washington

Leonard Cohen sang "First we take Manhattan". And technology companies sang along: using New York based financial services companies as early adopters of their products and then building out from these beachhead customers. Sun was the prime example. But also think of Check Point firewalls, and of course RIM with the Blackberry. Wall Street customers were a key part of their early revenue, awareness, and indeed contributed to key features in many cases.

But now look at Google with Google Apps. As eWeek reports, Google is building out a Government Cloud service with Google Apps. It is a parallel system to the commercially-available Google Apps. That itself is interesting, because Google Apps features multi-tenancy which in theory should have kept government users separate from other users. But clearly nobody wanted to take that chance.

The big story is that Google is using government, not Wall Street, as its beachhead. Where previously a technology company would have used a New York based financial services company as its prime reference, Google is targeting the US Federal Government. It's "First we take Washington", not "First we take Manhattan". And now that Google has a government offering, we see the ripples - like this ZDNet story: "Do you really need Office? Really? If the Feds don't, do we?"

This is part of a larger trend which we have seen first-hand in Vordel. Many branches of the US Government have chosen Vordel for their SOA deployments (e.g. the Federal Aviation Administration: FAA chooses Vordel for SOA work - Government Computer News). Our government customers are a prime reason why our US headquarters is in Herndon, close to Washington. On top of the Vordel deployments for government SOA, we are now seeing a lot of excitement for Cloud services using the Vordel Gateway Cloud Edition. As the VC blogger Jeff Bussgang from Flybridge Capital Partners has put it, "Washington is the New New York". It's where many innovative projects are, and it's where so much potential for Cloud Computing is.

Wednesday, September 16, 2009

Amazon, Burton Group, CA, Oracle, SOA Software, and Three to speak at Vordel's User Conference

There is a great line-up taking shape for the VordelWorld conference in November. Check out the latest conference news below:

---
Vordel, a provider of Cloud and SOA Governance products, today announced the line up for its annual VordelWorld user conference to be held in Dublin, Ireland on November 4-6.

VordelWorld puts the spotlight on SOA and Cloud Governance and presents case studies from leading firms such as Amazon, Bank of America, CA, Oracle, Pfizer, Three, the US Government, and several leading European telcos and insurance companies.

Keynote speakers will provide a mixture of strategic and pragmatic advice to help companies understand the issues at play when considering how to incorporate Cloud Computing services into their existing SOA or non-SOA enterprise architectures. This two-day event, packed full of insightful and thought-provoking content, is a firm favorite in the industry calendar and is always a sell-out. Keynote speakers at this year's event include:

• Steve Riley, Evangelist and Strategist, Amazon Web Services
• Richard Watson, Analyst, Burton Group
• Bill Mann, Senior VP Strategy, CA
• Vikas Jain, Principal Product Manager, Oracle
• Ian Goldsmith, VP Product Management, SOA Software
• Chris Taylor, Lead Architect for Enterprise Portals and Integration, Three

Attendees can also Get Vordel Certified with the Vordel Certified Systems Engineer hands-on training course and get a preview of what's coming next from Vordel. These training courses will be delivered in both English and French.

To register to attend the event simply click here. Looking forward to seeing some of you in Dublin in November.

Thursday, September 10, 2009

Ready for IBM Tivoli software - XML Gateway

The Vordel XML Gateway has "Ready for IBM Tivoli Software" certification and is profiled on the IBM PartnerWorld Global Solutions Directory under the "XML Gateway" category. Click on the image below to read more about this partner solution for IBM customers.

Wednesday, September 9, 2009

How to remove WS-Security tokens from a SOAP message

After you've validated a UsernameToken, or checked an XML Signature, it is often good practice to strip out the WS-Security blocks containing items like tokens and signatures before sending the message downstream to a Web Service. In some cases you strip these out because you don't want the password to remain in the message. In other cases, you may know that the downstream Web Service will choke on the WS-Security block. It also makes the message smaller.

The Vordel XML Gateway ships with a built-in stylesheet for stripping WS-Security blocks from SOAP messages. You can see this in the Policy Library. Simply apply this to a service, put it into a chain to run after you've processed the WS-Security headers, and voila the headers are gone. Grab a copy of the Vordel Gateway from here: http://www.vordel.com/products/vx_gateway/
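
If you wanted to do the same thing in standalone Java code, rather than with the Gateway's built-in stylesheet, a rough sketch using the standard SAAJ API might look like this (the namespace URI is the standard wsse one):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class StripWsSecurity {

    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    /** Remove any wsse:Security header blocks from the SOAP message. */
    public static void stripSecurityHeaders(SOAPMessage message) throws Exception {
        SOAPHeader header = message.getSOAPHeader();
        if (header == null) {
            return; // nothing to strip
        }
        // Collect the wsse:Security blocks first, then detach them
        List<SOAPHeaderElement> toRemove = new ArrayList<SOAPHeaderElement>();
        Iterator<?> it = header.getChildElements(new QName(WSSE_NS, "Security"));
        while (it.hasNext()) {
            toRemove.add((SOAPHeaderElement) it.next());
        }
        for (SOAPHeaderElement security : toRemove) {
            security.detachNode();
        }
        message.saveChanges();
    }
}
```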

Tuesday, September 8, 2009

Dead or Alive? There's an API for that

This week's Time Magazine has a piece by Gaelle Faure entitled "How to Manage Your Online Life When You're Dead" which describes what happens to online profiles, Webmail, and social networking data when someone dies. Consider "Deathswitch":

Deathswitch, which is based in Houston, has a different system for releasing the funeral instructions, love notes and "unspeakable secrets" it suggests you store with your passwords and account info. The company will regularly send you e‑mail prompts to verify that you're still alive, at a frequency of your choosing. (Once a day? Once a year?) After a series of unanswered prompts, it will assume you're dead and release your messages to intended recipients. One message is free; for more, the company charges members $19.95 a year.
http://www.time.com/time/business/article/0,8599,1916317-2,00.html

But did you know there is an API for checking if someone is dead or not? It's called the Death Index API, provided by CDYNE against the US Govt Social Security Death Index. Using this API would negate the need for "e‑mail prompts to verify that you're still alive".

Although this is a morbid example, it illustrates the kind of Cloud API service which can be composed together with other Cloud and on-premises services into banking, credit card, and insurance applications.

Friday, September 4, 2009

Replay Attacks: Why "If it works twice, then it doesn't work" makes sense

Everyone who has ever performed a demo has been asked "Can you send that message once more? I want to see if it will work a second time". The viewer wants to know whether the first successful run was just a fluke, and whether the demo will still work if you run the message through again.

But when you are demoing message-based authentication, you want to see the message blocked when it is resent. Counter-intuitively, it should be blocked precisely because it is the same message that worked a moment previously.

Why is this? The reason is replay attacks. If the same message, containing the same authentication tokens, is re-sent, then that message may have come from someone who has sniffed the traffic of a valid user. This is sometimes called a "capture-replay" attack.

Replay attacks are often misunderstood. I've seen them confused with DoS attacks (because they involve replaying messages, though a DoS attack is much less sophisticated and just uses brute force). I've also seen many cases where an organization wishes to create an authentication policy along the lines of "any message signed by a trusted user will be let in", which is wide open to a replay attack by anyone who can get hold of a message signed by one of the valid users.

The key to blocking replay attacks is to ensure that something in the message changes each time. Sometimes this is the timestamp, or the message itself if there is enough variation in messages (the Gateway keeps digests of previous messages and, if it detects a collision, blocks the message). But standards such as the WS-Security UsernameToken, as used by XML Gateways, have built-in support for blocking replay attacks. Let's take a look at how an XML Gateway blocks a replay attack:

First we use Vordel's free Web Service testing tool, SOAPbox, to put a WS-Security UsernameToken into a SOAP message. Click on the "Security" menu and then "Insert WS-Security Username Token". Enter the details as shown. I have set up a user called "JoeUser" on the Vordel Gateway with a password which matches the password I am using.



We now see the UsernameToken in the message:



Switch over to Design View, and you see the structure of the WS-Security UsernameToken. Notice the Timestamp and the nonce. You may be thinking "hang on, that's not my password there". But it is a password digest, constructed over the original cleartext password, the timestamp, and the random nonce ("number used once") which is created by SOAPbox as per the WS-Security specification.
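
For reference, the WS-Security UsernameToken profile defines the digest as Base64(SHA-1(nonce + created + password)). Here is a minimal standalone sketch of that calculation, using only standard JDK classes (the Base64 encoder shown is javax.xml.bind.DatatypeConverter):

```java
import java.security.MessageDigest;
import javax.xml.bind.DatatypeConverter;

public class PasswordDigest {

    /** PasswordDigest = Base64( SHA-1( nonce + created + password ) ) */
    public static String compute(byte[] nonce, String created, String password) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);                      // the raw (decoded) nonce bytes
        sha1.update(created.getBytes("UTF-8"));  // the Created timestamp, e.g. "2009-09-04T12:00:00Z"
        sha1.update(password.getBytes("UTF-8")); // the cleartext password, which never travels on the wire
        return DatatypeConverter.printBase64Binary(sha1.digest());
    }
}
```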



We send this message through to the Vordel Gateway, and it is passed as valid:



Notice the timestamp in the message. If an attacker changes the timestamp, that invalidates the digest. An attacker will not have access to the cleartext password and so can't create a new digest. But what if they simply resend the message within the timestamp window? Let's see by pressing "Send" again in SOAPbox.

We see that the replayed message is detected and blocked by the Vordel Gateway.



Looking at the Vordel Gateway real-time monitoring reports, we see that one message was valid and the other message was blocked:



Looking at the real-time Flash-based traffic reports, we see the blocked message was blocked because of its WS-Security UsernameToken.



You don't have to configure anything on the Vordel Gateway for this. It automatically detects replay attack attempts based on replayed WS-Security UsernameTokens and blocks them.
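
Conceptually, this kind of detection boils down to caching the nonces (or message digests) seen within the timestamp window and rejecting any repeat. The following is a simplified, generic sketch of that idea; it is not Vordel's implementation, just an illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Simplified, generic sketch of replay detection using a nonce cache. */
public class ReplayDetector {

    private final Map<String, Long> seenNonces = new ConcurrentHashMap<String, Long>();
    private final long windowMillis; // how long a nonce is remembered, matching the timestamp window

    public ReplayDetector(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** Returns true if the message should be blocked as a replay. */
    public boolean isReplay(String nonce, long messageTimestampMillis) {
        long now = System.currentTimeMillis();
        if (Math.abs(now - messageTimestampMillis) > windowMillis) {
            return true; // outside the allowed timestamp window: reject
        }
        purgeExpired(now);
        // putIfAbsent returns the previous value if the nonce was already seen
        Long previouslySeen = seenNonces.putIfAbsent(nonce, now);
        return previouslySeen != null;
    }

    private void purgeExpired(long now) {
        for (Map.Entry<String, Long> entry : seenNonces.entrySet()) {
            if (now - entry.getValue() > windowMillis) {
                seenNonces.remove(entry.getKey());
            }
        }
    }
}
```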

To test this yourself, grab your copy of SOAPbox here. And get an evaluation copy of the Vordel Gateway here.

Thursday, September 3, 2009

Java developers: How to extend the Vordel XML Gateway using Java

The Vordel XML Gateway ships with a Java SDK so that developers can create new blocks of functionality which run within the Vordel Gateway. Gateway administrators can then deploy these extra blocks of functionality, which surface in the Policy Director as extra "filters" which can be included with circuits running on the Vordel Gateway.

Full documentation of filters, as well as a worked example with Java source code, is available from the Vordel Extranet in the "Extensibility" section.



But what if you are thinking "I'm more of a servlet guy - can I just take a servlet and run it on the XML Gateway?". The answer is "Yes". The Vordel XML Gateway includes a full servlet container, which sits on top of Vordel's core XML Acceleration with its JAXP interface. You can thus take an existing servlet and run it right on the Vordel Gateway, taking advantage of acceleration.
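
As an illustration of what such a servlet might look like, here is a plain javax.servlet example; there is nothing Vordel-specific about it:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** An ordinary servlet, deployable to any servlet container. */
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Hello from a servlet running in a servlet container");
    }
}
```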


Wednesday, September 2, 2009

Catch PaulDotCom and Alex Horan at OWASP Boston

My pattern of missing OWASP Boston meetings due to out-of-town travel continues next week. And this time the meeting would have been very convenient for me to reach via Commuter Rail near South Station (versus trying to negotiate the awkward Route 128 exit to get to Microsoft's offices in Waltham). It would have been good to see Paul Asadoorian (AKA PaulDotCom) speak, and to hear from the CORE guys. Anyway, maybe next time.

Details here:
https://lists.owasp.org/pipermail/owasp-boston/2009-August/000175.html

Tuesday, September 1, 2009

Are users to blame for Cloud Insecurity?

In an insightful post, Paul Miller asserts that "Cloud providers’ own systems will tend to be more secure than those that the majority of potential customers have in-house today". You may ask - So why is cloud security seen as a problem then? The problem, as he points out, is the customers who place insecure applications into those pristine, secure Cloud hosting environments:

The customers who open up all the ports you so carefully closed by default. The customers who use ‘password’ as their password. The customers who deploy sloppy code that’s riddled with holes. The customers who, frankly, are just human… and who don’t live and breathe security in the same way that at least someone inside the data centre probably does.
http://cloudofdata.com/2009/08/security-and-the-cloud-will-focus-shift-to-the-customer/

So who is to blame if a customer deploys an insecure application to the Cloud? Can you just blame the customer? It is not as easy as that...

Twelve years ago, I was working for an ISP and one of my jobs was to vet Perl and ASP scripts for security holes. If the ISP let a customer host an insecure script, who was to blame? Or does it even matter who is to blame, after the damage is done? If the script could tie up resources on the system and degrade the performance of other customers' applications, then the ISP could not turn around and say "don't blame us, blame that other customer over there with their insecure script". In reality, we would block insecure applications before they were hosted.

The same went for SQL Server applications. Everyone seemed to use "sa" and a blank password. Does this mean SQL Server was inherently insecure? Arguably, yes (well, it was certainly not "secure by default" in those days). But in reality, it was up to the hosting provider to mitigate the insecurity of applications and the insecurity of code. We could not just say "blame Microsoft".


Multi-tenancy

Although Paul Miller's blog post does not use the word "multi-tenancy", it is one of the keys here. Back in my ISP hosting days, we liked to think that we'd perfected "multi-tenancy", whereby the scripts of one customer could not interfere with the scripts of other customers. However, we still vetted the scripts for security holes.

The Cloud providers seem to take different tacks on this question.

I've noticed that Amazon will examine traffic to and from EC2 instances, and if they see evidence that a machine has been taken over by a bot or trojan, they then block traffic to it. This is like a "quarantine" approach - allow the customer to create an insecure EC2 image, but then detect its behavior and stop it from interfering with other EC2 images.

SalesForce take a different tack. They force developers to proactively add testing code to their applications written in the Apex language on the Force.com platform. Here is the section on Multi-Tenancy in the Force.com "Introduction to Apex":
Multi-tenancy

The Force.com platform is a multi-tenant platform, which means that the resources used by your application (such as the database) are shared with many other applications. This multi-tenancy has a lot of benefits, and it comes with a small promise on your part. In particular, if you write Apex code, the platform needs to do its best to ensure that it is well behaved.

For example, Apex code that simply loops will not benefit you (or the cloud). This is the reason why Apex, when deployed to a production server, needs 75% code coverage in tests.

The Apex runtime engine also enforces a number of limits to ensure that runaway Apex does not monopolize shared resources. These limits, or governors, track and enforce various metrics. An example of these limits include (at the time of writing): the total stack depth for an Apex invocation in a trigger is 16, the total number of characters in a single String may not exceed 100000, and the total number of records processed as a result of DML statements in a block may not exceed 10000.

There are various precautions that can be taken to ensure that the limits are not exceeded. For example, the batched for loop described earlier lifts certain limits, and different limits apply depending on what originated the execution of the Apex. For example, Apex originating from a trigger is typically more limited than Apex that started running as part of a web services call.

http://wiki.developerforce.com/index.php/An_Introduction_to_Apex
So SalesForce are effectively saying "We have a secure environment, and if you want to upload code to run on it, you have to make sure your code has built-in checks to ensure it does not inadvertently monopolize resources". Amazon seem to be taking the alternative approach of discovering if an app is misbehaving, and then quarantining it (of course, as IaaS, EC2 is quite different from Force.com which is PaaS, so their approach is different). It remains to be seen what approach is best. Customers can hedge their bets by using a Cloud Gateway to secure their data in the cloud, keeping it encrypted and signed, so that it is safe even if other apps in the multi-tenant environment misbehave.
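
As a rough illustration of that "encrypt and sign before it leaves your premises" idea, here is a minimal sketch using only standard JDK crypto classes. Key management, key exchange, and the surrounding protocol are of course the real work, and are omitted here:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

/** Minimal sketch: encrypt a payload with AES, then sign the ciphertext with RSA. */
public class ProtectPayload {

    public static void main(String[] args) throws Exception {
        byte[] payload = "sensitive customer record".getBytes("UTF-8");

        // Symmetric key to encrypt the data before it is sent to the cloud
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey aesKey = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] ciphertext = cipher.doFinal(payload);

        // Sign the ciphertext so tampering in the multi-tenant environment is detectable
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair rsaKeys = rsaGen.generateKeyPair();

        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(rsaKeys.getPrivate());
        signer.update(ciphertext);
        byte[] signature = signer.sign();

        System.out.println("Encrypted " + ciphertext.length + " bytes, signature of "
                + signature.length + " bytes");
    }
}
```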