Thursday, July 30, 2009

Comic-Con and Burton Catalyst - The crossover

Comic-Con was in San Diego last week; Burton Catalyst is in San Diego this week. So San Diego's Gaslamp District was traversed by comic book geeks last week, but this week the same bars and hotels are populated by identity management geeks. But what about the crossover between the two groups? This poster in the Gaslamp District cuts across both: Choose your identity

Wednesday, July 29, 2009

Connecting to the Cloud


Today we at Vordel released Version 5.2 of our Gateway, which connects to Cloud Services.

If you need to connect your systems to SalesForce.com, Amazon S3 or SQS or SimpleDB, or Microsoft or Yahoo! Web Services, with security and without writing code, then this is the product for you.

Here's the release:

Vordel connects SOA to the Cloud
Vordel 5.2 release delivers control and visibility of SOA and Cloud services

San Diego, CA - July 29, 2009 - Vordel, the XML networking management company, today announced at the Burton Catalyst Conference the general availability of Vordel 5.2, its suite of application networking products. Vordel 5.2 enables enterprises to connect their applications to Cloud services such as Amazon Web Services, SalesForce.com, and Microsoft Azure. Vordel benefits enterprises by resolving the complexity of connecting SOA and Cloud-based services and providing a secure and transparent environment for this process.

To successfully transition and adapt to a cloud environment, organizations must implement the core tenets of a governance process. Vordel 5.2 governs the deployment and usage of services deployed either in a SOA or Cloud-based environment. Central to 5.2 is the Vordel Gateway, which acts as the focal point for enforcing policy and auditing usage of services, and which is independent of, and transparent to, cloud providers.

Vordel CEO Vic Morris, speaking about the announcement at the Burton Catalyst Conference, said: "The rules for Governance of SOA and Cloud computing are very similar; enterprises need to control access by end users of their services, monitor the service usage and guarantee service availability."

"The economies of using Cloud Computing are compelling. But, without governance in place, Cloud Computing is a false economy. With Vordel 5.2, CIOs can maintain full visibility and control of service usage whilst leveraging the efficiency and cost savings of Cloud computing. It also appeals to those developers and architects who need a fast and secure way to integrate Cloud-based services with their local SOA applications," added Morris.

What's new in 5.2?
Key features of Vordel 5.2 include:

  • Out of the box connectors to Cloud providers including Amazon Web Services
  • API Key support
  • Negotiation of SalesForce.com Session IDs
  • Automatic WS-Policy advertisement and consumption
  • Greatly enhanced visibility and reporting of Services usage via Vordel Reporter
  • New free edition of the latest Vordel SOAPbox Web Services stress and security testing tool
  • Integration with Oracle Access Manager and Enterprise Manager
  • Support for TIBCO Rendezvous and EMS
  • Now available as an Amazon EC2 Image in addition to traditional appliance and software options on Windows, Linux and Solaris

This announcement supports Vordel's strategy of delivering high-performance application networking solutions built on standards-based XML technologies.

About Vordel

Vordel is an XML network management company that provides high-performance, enterprise-level hardware and software products to enable enterprises to confidently deploy SOA and Cloud-linked applications. Vordel accelerates, manages and protects XML applications, enabling enterprises to govern their Web Service and Cloud service usage and to ensure service performance. For more information, visit http://www.vordel.com

Press contact

press-info@vordel.com
+353 1 234 2500

annemarie@returnpr.com
+ 353 86 834 9329

Overheard at Burton Catalyst

"REST is just the HTTP GET based interop which we used to do 20 years ago".

Well, maybe not 20 years ago, but in many ways there is nothing new about REST. For me, it goes back to a 1996 article by Jon Udell in Byte Magazine entitled "On-Line Componentware". You can read the article via Archive.org at http://web.archive.org/web/*/http://www.byte.com/art/9611/sec9/art1.htm . The article described calling AltaVista using a Perl script. That issue, with "Java Chips" on the cover, is sitting in a box in an attic in north Dublin - stuff I never got around to bringing over to Boston. Jon Udell's column talked about calling a URL programmatically by passing parameters in a Query-String, and then parsing the HTML results into Perl variables. Simple in retrospect, like a lot of good ideas, but it was revolutionary to read that every website is a software component and that "A powerful capability for ad hoc distributed computing arises naturally from the architecture of the Web". Websites become APIs. For me, this idea inspired projects I implemented for the Irish government Revenue service (equivalent of the US IRS).

Fast forward to 2009, and Cloud Services are presented as APIs, with API Keys, called using REST patterns. It is reasonable to say "there is nothing new here". Look at how Bing exposes a Web API. It's very similar to how Jon Udell was calling AltaVista in 1996, except with the addition of the API Key in order to deter abuse.
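To make the pattern concrete, here's a sketch in Python of the Udell-style "website as API" call. The search endpoint, parameter names, and response format below are all hypothetical stand-ins, not any real service's API:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# A sketch of the "website as API" pattern: pass parameters in a
# query string, then parse the structured response. The endpoint
# name is hypothetical; real services differ in URL and schema.
def build_url(query, api_key):
    params = {"q": query, "key": api_key}  # the API key is the modern addition
    return "http://search.example.org/api?" + urlencode(params)

# In 1996 the response was HTML scraped with Perl regexes; today a
# service typically returns XML (or JSON) that parses cleanly:
sample_response = """<results>
  <result url="http://example.com/1" title="First hit"/>
  <result url="http://example.com/2" title="Second hit"/>
</results>"""

hits = [r.get("title") for r in ET.fromstring(sample_response).findall("result")]
print(build_url("vordel", "PutYourKeyHere"))
print(hits)  # ['First hit', 'Second hit']
```

Swap in a real endpoint and an HTTP GET, and this is essentially what Jon Udell was doing with AltaVista in 1996, minus the API key.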

One new thing is that there is now SOAP as well as REST. But even there, it shouldn't be a case of "one or the other". If you want to bridge REST to SOAP-based Web Services, check out this post on converting REST to SOAP using the Vordel Gateway.

Tuesday, July 28, 2009

Service Modeling - Don't build a folly

The art of service modeling and Saint Anne's Park on Dublin's Northside, together at last in this blog post by Richard Watson. He explains how incorrect service modeling can result in a "folly", like the folly building in Saint Anne's Park in Dublin. I'm looking forward to Richard's talk at Burton Catalyst on service modeling on Wednesday afternoon at 4.10pm.

The abstract for Richard's talk explains that:
Service design principles, such as clean separation of concerns and loose coupling, should guide our service modeling. The need to apply these design principles is only heightened if you surrender control over location, implementation, or quality of service to an “externalized” source, such as a cloud or SaaS provider.

Ironically, it is difficult to provide a link to the description of Richard's talk on the Catalyst website because the site's use of a Flash-based schedule means that they "surrender control over location" of the individual URLs for the talks. So it's difficult to post a link to a Catalyst talk in a blog, or indeed Twitter. But, using Firebug, you can work around the Flash and get the URL for Richard's talk, which is:
https://burtongroup.wingateweb.com/us09/scheduler/weekatglance/scheduleItemDetail.jsp?sess_id=1831

Does this mean that the Flash interface on the Catalyst site, which masks the URL-addressable locations of the individual talks (making it tricky for bloggers or Twitterers to link to them), is a "folly"? :-)

Friday, July 24, 2009

Vordel Analyst Paper on Policy-Driven SOA

Following the ZapThink event yesterday evening, this morning I re-read the ZapNote paper on Vordel. It focuses on policy management, and asserts that "policies are as important to SOA as Services themselves, and they should be managed throughout their lifecycles as such".

Check it out by clicking on the image below:


Thursday, July 23, 2009

An Irish wake for SOA?

In MJ O'Connor's Pub in Boston this evening, ZapThink is organizing a gathering to discuss SOA. The attendees include Anne Thomas Manes who famously pronounced SOA dead in her "SOA is Dead" post (by the way, it's fun to type "SOA is" into a Google search and see what helpful drop-down hints Google provides for you). Other attendees include Dana Gardner, Brenda Michelson, Sandy Rogers from IDC, and Dave Chappell from Oracle.

SOA as "architecture for the sake of architecture" may rightfully be dead, but many organizations are happily connecting systems together using services (remember, the second part of "SOA is dead" is "Long live services"). Check out our Google Map of Vordel customer case studies. Many of those customers are expressly not doing SOA, but all are very successfully connecting systems together, using service patterns.



So, with the free drink and the conversation about "SOA is dead", is the event tonight just like an Irish wake for SOA? Well, if it is, then the conversation may go like this:

---
(First drink consumed)

You know I always liked SOA. I wouldn't hear a bad word said against SOA.

(A few more drinks consumed)

You know what always annoyed me about SOA - all this high-falutin' talk for the sake of it. I always wished SOA would just focus on getting stuff done, not all this talk. To tell ye the truth, I always preferred the services over SOA.

(many more drinks consumed)

It moved! I'm sure I saw SOA move there! Maybe SOA is not dead after all!

------

:-)

Wednesday, July 22, 2009

XML Gateway case studies around the world

Check out Vordel's global case studies page, in all its Web 2.0 Google Mappy goodness. Examples include Allianz, Fortis, the Spanish Government, British American Tobacco, and Mazda.

Monday, July 20, 2009

How to create a WS-Security UsernameToken without a password

Scott Hanselman had a recent blog post about how a client asked him to create a WS-Security UsernameToken without a password, in order to send what amounted to a Web 2.0 style "API Key" within a SOAP message.

I've seen the "API Key" pattern also crop up in SOAP messages before. It's reasonably common practice to use a WS-Security block as the way to encapsulate the API Key, rather than, say, coming up with a new custom token.

The problem which Scott ran into was how to create this password-less WS-Security block using WCF. This is something that's easy to do with the SOAPbox testing tool - here is how:

First load in your SOAP message (or generate one from WSDL). Next, in the "security" menu, choose "Insert WS-Security Username":



You'll notice that you have the option to include a password, or not include a password, by selecting the checkbox:



Here we've chosen not to include a password. When we look at the message in the design view in SOAPbox, we see the WS-Security token there, but no password:
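For reference, alongside the screenshot, here's a sketch in Python of what such a password-less token looks like on the wire. The API key value is a made-up placeholder, and a real message would carry this header inside a full SOAP envelope:

```python
import xml.etree.ElementTree as ET

# A sketch of a password-less WS-Security UsernameToken: a
# <wsse:UsernameToken> whose <wsse:Username> carries the API key,
# with no <wsse:Password> element at all.
WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
ET.register_namespace("wsse", WSSE)

def username_token(api_key):
    security = ET.Element("{%s}Security" % WSSE)
    token = ET.SubElement(security, "{%s}UsernameToken" % WSSE)
    username = ET.SubElement(token, "{%s}Username" % WSSE)
    username.text = api_key  # the API key rides in the Username
    return security          # note: no Password element at all

header = username_token("my-api-key-12345")
xml = ET.tostring(header, encoding="unicode")
print(xml)
```

The absence of the Password element is the whole point here: the token is being used purely as a carrier for the key.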



The neat thing about SOAPbox is that the configuration carries over to the Vordel XML Gateway. So the same screens which you use to configure SOAPbox are used to configure the Vordel Gateway. Let's say you want to deploy the Vordel Gateway to create the SOAP messages containing API Keys in password-less WS-Security UsernameToken blocks, as in Scott Hanselman's example. Basically you are converting from an HTTP GET (REST) to SOAP (I've written about how to do REST-to-SOAP conversion with the Vordel Gateway before) while taking the API Key from the HTTP headers and dynamically inserting it into a UsernameToken. In this way, you can support "traditional" browser clients sending API Keys, while also supporting the back-end Web Service which requires this non-traditional usage of a UsernameToken to encapsulate an API Key. This is shown below:



So, an XML Gateway is a way to support this type of "API Key in UsernameToken" case, while SOAPbox is a way to test it from the client-side perspective.

Get more information on the Vordel Gateway here.
...and grab your copy of SOAPbox here.

Friday, July 17, 2009

Oracle-Vordel Presentation now online

The Oracle-Vordel presentation from RSA 2009 is now available

The 53-slide presentation includes:

- Reference Architecture for XML Gateways and Endpoint Agents working together
- 5 Real-Life Case Studies
- Lessons Learned

Enjoy!

Thursday, July 16, 2009

Pandora: When an API doesn't make sense for an online service

A few days ago I wrote a piece on testing Bing's search API using Vordel SOAPbox. Since then, I've tested some other APIs (URL/REST based, SOAP, and JSON) using Vordel SOAPbox. One which sprang to mind was Pandora. Although it's relatively unknown outside the US (it operates legally only in the US), Pandora is a handy online music streaming service. You "train" Pandora by providing it with feedback on the songs it chooses to play for you. I was thinking "they must have an API which could be used to write an app to train Pandora more efficiently than voting on each song in real time". But I discovered that Pandora doesn't have an API (unless you count the client-side JavaScript API referenced here, but of course that can't be run over the network).

When you think about it, it's obvious why Pandora would not have an API. They make money from advertising. If you can mould your own app like a piece of modelling clay, incorporating a feed of music from Pandora but not Pandora's advertisements, then Pandora loses money.

A pity, though. Imagine, for example, an app which would go through your local collection of MP3s and mark each one as a "thumbs up" on your Pandora profile? That would be useful. But there is no public API to build such a thing...

Wednesday, July 15, 2009

"The threat of access by a third party increases exponentially with the move to the cloud"

Peter Kafka at the Wall Street Journal's "All things digital" covers the Twitter security breach today, noting that the problem was that so much information was being stored by Twitter (the company, not the service) in the "Cloud" using Google. By breaking into a Gmail account, an attacker was able to access many Twitter corporate documents. Unlike the case a few years ago, these documents were not stored behind a security perimeter on a corporate LAN, but rather they were stored on Google's cloud-based services (of which Gmail is just one).

Albert Wenger (an investor in Twitter, as a matter of fact) suggests the usage of two-factor authentication for Gmail (and other Cloud-hosted apps) as one solution (Cloud/Web App Security - A Modest Proposal). This addresses the authentication problem (whereby an attacker can attempt to guess a password to Gmail). But this has been difficult for banks to implement across the board, whether with RSA SecurID tokens or other one-time password systems. Users find them too awkward. And if that is a problem for online banking, what are the chances of two-factor authentication being adopted en masse for Gmail?

But let's look at the problem another way.

Consider the difference between stealing documents from someone's Gmail account (and associated Cloud services) and stealing the documents from corporate servers behind a firewall. And add to this the fact that the stolen documents are now being published by TechCrunch. On Robert X. Cringely's blog, an anonymous commenter makes this distinction: "If the information had been stolen from Twitter's own servers, would TechCrunch be as quick to publish it?".

From an attacker's point of view, stealing the documents from the corporate network may involve getting physical access or VPN access. Stealing the same documents from a Cloud-based service like a Gmail account means guessing a password and then using that account to do "password recovery" to gain other passwords.



In the diagram above, which is easier: 1 or 2? Of course, option 2 is easier.

One option to fix this is Albert Wenger's modest proposal: Increase the strength of authentication to the Cloud Services. This is certainly one option.

But think about where that security perimeter is. In the "old world" where documents were on a server somewhere, the server was inside that security perimeter. If you put a perimeter around the Cloud environment, you still have the problem that the documents can easily travel outside that perimeter (after all, many are in the form of email attachments anyway - by definition they have already been sent elsewhere).

Think about how, logically, the perimeter should be shrunk right down to the documents themselves:



This means that the sensitive documents are encrypted (and signed for tamper-evidence). So how could this be implemented? One answer is to host the documents using a service such as Amazon's S3 and email URLs to the documents (which is how S3 works anyway - each document is represented as a URL-locatable resource) instead of passing around the documents themselves. Then use a Cloud Gateway to selectively encrypt and sign the documents being sent up for storage, so that they are stored encrypted and signed, and then, for authorized users who can perform strong authentication, decrypt the documents as they are retrieved. Albert Wenger's proposal of SMS-ing the decryption key to the reader can also be used (so an attacker may get the encrypted document, if they hack into the email account, but will not get the key).
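As a sketch of the "perimeter around the document itself" idea, here is the tamper-evidence half in Python, using only the standard library. A real gateway would also encrypt the payload (e.g. with AES) before it goes up to S3, and the key delivery could use Albert Wenger's SMS suggestion; the key and document values here are illustrative:

```python
import hashlib
import hmac

# Seal a document before handing it to cloud storage: attach an
# HMAC so any tampering while in storage is detectable on retrieval.
# (Encryption of the payload itself is omitted in this sketch.)
def seal(document: bytes, key: bytes) -> bytes:
    tag = hmac.new(key, document, hashlib.sha256).digest()
    return tag + document

def verify(sealed: bytes, key: bytes) -> bytes:
    """Return the document if the tag checks out, else raise."""
    tag, document = sealed[:32], sealed[32:]
    expected = hmac.new(key, document, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("document was modified in storage")
    return document

key = b"key-delivered-out-of-band"   # e.g. SMS-ed to the reader
stored = seal(b"confidential board minutes", key)
print(verify(stored, key))
```

The point of the design is that the cloud provider only ever holds the sealed blob; whoever holds the out-of-band key can detect (and, with encryption added, prevent) any reading or modification in storage.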

In that way, the documents are not being stored in the clear. Google or Amazon, or any cloud provider, does not have to be trusted. An attacker could then guess a Gmail (or other hosted email provider) password, but the strong authentication would be required in order to access the documents themselves referenced in emails.

Much of the discussion about the Twitter hack has focussed on authentication. This is important. But consider a hospital with medical data (the example I used in my IBM DeveloperWorks series on Cloud Computing security). The requirement is that the data itself is secured. If the data is stored by Amazon or Google in the clear, even with strong authentication required to access it, the data is still in the clear. That has privacy and compliance implications. I'm proposing that another way to look at it is to see the perimeter as being around the data itself, to secure that, and not to trust the Cloud provider.

More on Cloud Security here.

Testing Bing's API

One of the interesting things about Bing is that, unlike Google Search, their search API carries no usage quota. The only API usage restriction is the requirement that Bing's API is used for “user-facing applications” only (i.e. not for data-mining or data-harvesting applications).

You can read all about Bing's API here. The Bing API is an updated version of Microsoft's Live Search API, which explains why Bing's API is already at Version 2.0, despite Bing itself being just a few weeks old.

So let's put Bing's API to the test using the SOAPbox Web Services testing tool...

Open up SOAPbox, and then select "Request Settings" under the drop arrow next to the address bar:



Now enter the details for connecting up to Bing. The URL must contain your App ID. To quote Microsoft:

Getting an AppID is a straightforward process. First, go to the Bing Developer Center and sign in with your Windows Live ID. After signing in, you will be presented with a link to create a new AppID. Click the link, then supply basic information about your application and review the Terms of Use.

Your AppID then goes into the SOAPbox "Request Settings" page as shown below:

http://api.search.live.net/xml.aspx?Appid=PutYourAppIdHere&query=vordel&sources=web

Be sure to set the verb as "GET" (we're testing the REST API first - although SOAPbox has "SOAP" in its name, you can use it to test REST services).
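For comparison, the same request can be built outside SOAPbox with a few lines of Python. The AppID is the same placeholder as above, so the GET itself is shown but not executed:

```python
from urllib.parse import urlencode

# Build the same Bing request that SOAPbox sends; the AppID is the
# placeholder from above, so we only print the URL here.
params = {"Appid": "PutYourAppIdHere", "query": "vordel", "sources": "web"}
url = "http://api.search.live.net/xml.aspx?" + urlencode(params)
print(url)

# With a real AppID, issuing the GET is one more step:
# from urllib.request import urlopen
# xml_results = urlopen(url).read()
```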



Now press the green button to send the request up to Bing. We then see the search results in the results pane:



Now the fun begins. Click over from "Classic" mode to "Design" mode. In the bottom-left, click on the "Stress" tab. Configure a stress test like the one below. I've set it to send 100 searches up to Bing, using the same API key each time:



You can see from the results that all the requests went through fine. HTTP 200s all round. Bing's API had no problem with this kind of usage.

But, as Ralf Rottmann notes, you must "Restrict the usage to less than 7 queries per second (QPS) per IP address. Exceeding this limit must be approved with the team at api_tou@microsoft.com".
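If you are automating requests like this, it's worth throttling on the client side to stay within that limit. Here's a minimal sketch in Python; the send_query function is a stand-in for whatever actually issues the HTTP GET:

```python
import time

# Space requests at least 1/7 s apart to stay under Bing's stated
# limit of 7 queries per second per IP address.
MIN_INTERVAL = 1.0 / 7.0

def throttled(send_query, queries):
    last = 0.0
    results = []
    for q in queries:
        wait = MIN_INTERVAL - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)  # stay under 7 QPS
        last = time.monotonic()
        results.append(send_query(q))
    return results

start = time.monotonic()
out = throttled(lambda q: "200 OK", ["vordel"] * 8)
elapsed = time.monotonic() - start
print(len(out), round(elapsed, 2))  # 8 queries take at least a second
```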





I guess you could say "this is not really a 'user-facing application', it's an example of automating a request". But, I'm a user and I'm facing SOAPbox :-)

Grab your copy of SOAPbox here and happy testing of Bing (and other APIs too).

Monday, July 13, 2009

Lessons from SOA governance

Last month Joe McKendrick wrote that SOA, IT and cloud governance are converging into 'total services governance'. SOA Governance should go hand-in-hand with cloud governance, and both should link into IT governance (systems monitoring, reporting). Here at Vordel, we've seen many lessons from SOA Governance carry over to our implementation of cloud governance. For example, like SOA Governance, cloud governance is a non-runner if it adds latency to services themselves. These are lessons we've built into the Vordel products, designed for speed as well as governance.

Here is an IBM DeveloperWorks article I wrote on how lessons from SOA Governance apply to Cloud governance:

Friday, July 10, 2009

SOAPbox is now free! Web Services testing for the masses

You know that bit in the Simpsons when Homer refuses to believe that pork chops, bacon and ham all come from the same animal? He says "Heh heh heh. Ooh, yeah, right, Lisa. A wonderful, magical animal". Well, imagine if there was a "wonderful, magical" testing tool which would
  • Load-test a Web Service while showing real-time test feedback
  • Issue certificates (in fact act as a mini-PKI)
  • Populate SAML Assertions
  • Support JMS as well as HTTP/SSL
  • Negotiate Kerberos as well as mutual SSL
  • ...and dynamically insert attack vectors?
All in the same tool? Well, there is such a tool: Vordel SOAPbox. And it is now free.

Grab it at:
http://www.vordel.com/products/soapbox/

Wednesday, July 8, 2009

Vordel Featured in Government Computer News

"Deployed on an organization’s local network, the Vordel Gateway Cloud Edition acts as the pivot point between applications and cloud-based services, providing an on-ramp from local applications to the cloud".

The full story by Trudy Walsh here:
http://gcn.com/articles/2009/07/08/vordel-adds-interoperability-to-the-cloud.aspx

Downsides of Schemas

Prompted by Dave Wichers, Gunnar Peterson has a good piece today about the use of hardened schemas.

Schemas are indeed useful, but here are some downsides:

- The availability of a schema helps in a plaintext-guessing attack against encrypted data, since an attacker knows the structure of the unencrypted data, the names of elements and attributes, and maybe default values too.

- Applications which are coded to validate all incoming XML can be diverted to a malicious Schema using the SchemaLocation attribute. The malicious Schema could include very complex checks which would choke a parser. This behavior can be turned off in some platforms; for example, here is how it's done in .NET: http://msdn.microsoft.com/en-us/library/ms763691(VS.85).aspx . I should note that Gunnar's code is not vulnerable to this attack since he specifies the Schema in the "schemaFile" variable. But many applications are. It is a neat way to turn a security measure against itself.

- As Gunnar says, Schema validation is only as good as the Schema itself. Most Schemas are just about the data-types ("this is a string, this is a string, this is also a string", etc). That is not useful for security purposes.

- Schemas define what should be in an XML document, but are not useful for defining what should not be in an XML document. That is where threat scanning for attack signatures (e.g. SQL Injection) comes in.

- Schema validation does not apply to RPC-encoded SOAP messages (partly because type information is included in each element that appears within the SOAP message). Unfortunately, RPC-encoded SOAP still exists in the wild. However, here at Vordel we do the seemingly impossible in our XML Gateway: allowing RPC-encoded SOAP messages to be validated against Schemas. If you need to validate RPC-encoded SOAP messages, check out the Vordel XML Gateway.

- And the biggie: Performance. Without an XML Acceleration system such as VXA in place, Schema validation can add significant latency to message throughput. In fact, that is one of the big reasons why Schema Validation is often skipped in Java and .NET apps (a bad idea!).

I'm not discouraging Schema usage, just saying that some caveats have to be kept in mind.
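To illustrate the threat-scanning side mentioned above, here's a toy Python scanner that looks for a couple of SQL injection patterns in an XML payload. The signatures are illustrative only, not a production ruleset:

```python
import re

# A toy signature scanner for the "what should NOT be in the XML"
# side of validation: schema validation will happily pass any
# well-typed string, so a separate pass looks for attack patterns.
SIGNATURES = [
    re.compile(r"('|%27)\s*(or|OR)\s+\d+\s*=\s*\d+"),      # e.g. ' OR 1=1
    re.compile(r"(?i)\b(union\s+select|drop\s+table)\b"),  # classic SQL keywords
]

def scan(xml_text):
    """Return the patterns of any signatures found in the payload."""
    return [sig.pattern for sig in SIGNATURES if sig.search(xml_text)]

clean = "<user><name>alice</name></user>"
attack = "<user><name>x' OR 1=1 --</name></user>"
print(scan(clean))   # no matches
print(scan(attack))  # SQL injection signature matched
```

A real gateway would of course carry a much larger ruleset and combine it with schema validation rather than replace it.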

Monday, July 6, 2009

"APIs will be the answer" for Cloud interop

InformationWeek ran a 4th of July story about US Government usage of Cloud Computing. Once again, interoperability and security are flagged as issues. It quotes an answer for interoperability, though:
"On the question of interoperability, APIs will be the answer"
http://www.informationweek.com/news/government/technology/showArticle.jhtml?articleID=218400025

APIs are indeed how Cloud Computing platforms can link together. But it would be a mistake to think that APIs imply that programming is also needed. Infrastructural products like Vordel's Gateway Cloud Edition allow applications to link to Cloud services, and Cloud services to link to other Cloud services, using APIs but without coding. APIs are one part of the solution, for sure, but the ability to monitor Cloud usage, to apply policies to Cloud services, and to alert on outages is also vital. These are not delivered by APIs alone, but they are the mainstay of Gateway products, which have been doing this kind of API-based integration for many years now.

Here is a screenshot of the configuration for linking a local Web Service to an Amazon SQS service using Vordel's Gateway. It uses APIs under the hood, but the configuration does not involve coding - it is just drag-and-drop. The policy governance, monitoring, and visibility are all provided as standard:

Friday, July 3, 2009

Bridging the interoperability gap for Cloud Computing

Analyst reaction to Vordel's Cloud Edition Gateway:

Phil Schacter, Vice President and Service Director with Burton Group said "The cloud services market is immature with few standards on how customers establish and control access by their users, and how providers protect information and report activity back to the customer. The concept of an enterprise gateway that connects to all internal and public cloud services accessed by various departments and users is an important innovation that allows a focal point for enforcing policy and auditing usage of services, and that is independent and transparent to specific cloud providers."
http://www.vordel.com/news/press/30_06_09.html

This is the key - an enterprise gateway being the focal point for enforcing policy and auditing usage of services (or "container", in the terminology proposed by Gunnar Peterson). And it is another kind of point too: the pivot point between local applications and Cloud services, and between the Cloud services themselves.