Thursday, July 30, 2009
Wednesday, July 29, 2009
Today we at Vordel released Version 5.2 of our Gateway, which connects to Cloud Services.
If you need to connect your systems to SalesForce.com, Amazon S3 or SQS or SimpleDB, or Microsoft or Yahoo! Web Services, with security and without writing code, then this is the product for you.
Here's the release:
Vordel connects SOA to the Cloud
Vordel 5.2 release delivers control and visibility of SOA and Cloud services
San Diego, CA - July 29, 2009 - Vordel, the XML networking management company, today announced at the Burton Catalyst Conference the general availability of Vordel 5.2, its suite of application networking products. Vordel 5.2 enables enterprises to connect their applications to Cloud services such as Amazon Web Services, SalesForce.com, and Microsoft Azure. Vordel benefits enterprises by resolving the complexity of connecting SOA and Cloud-based services and providing a secure and transparent environment for this process.
To successfully transition and adapt to a cloud environment, organizations must implement the core tenets of a governance process. Vordel 5.2 governs the deployment and usage of services deployed in either a SOA or Cloud-based environment. Central to 5.2 is the Vordel Gateway, which acts as the focal point for enforcing policy and auditing usage of services, and which is independent of and transparent to cloud providers.
Vordel CEO, Vic Morris, speaking about the announcement at the Burton Catalyst Conference said "The rules for Governance of SOA and Cloud computing are very similar; enterprises need to control access by end users of their services, monitor the service usage and guarantee service availability."
"The economies of using Cloud Computing are compelling. But, without governance in place, Cloud Computing is a false economy. With Vordel 5.2, CIOs can maintain full visibility and control of service usage whilst leveraging the efficiency and cost savings of Cloud computing. It also appeals to those developers and architects who need a fast and secure way to integrate Cloud-based services with their local SOA applications," added Morris.
What's new in 5.2?
Key features of Vordel 5.2 include:
- Out of the box connectors to Cloud providers including Amazon Web Services
- API Key support
- Negotiation of SalesForce.com Session IDs
- Automatic WS-Policy advertisement and consumption
- Greatly enhanced visibility and reporting of Services usage via Vordel Reporter
- New free edition of the latest Vordel SOAPbox Web Services stress and security testing tool
- Integration with Oracle Access Manager and Enterprise Manager
- Support for TIBCO Rendezvous and EMS
- Now available as an Amazon EC2 Image in addition to traditional appliance and software options on Windows, Linux and Solaris
This announcement supports Vordel's strategy of delivering high-performance application networking solutions built on standards-based XML technologies.
Vordel is an XML network management company that provides high-performance, enterprise-level hardware and software products to enable enterprises to confidently deploy SOA and Cloud-linked applications. Vordel accelerates, manages and protects XML applications to enable enterprises to govern their Web Service and Cloud service usage and ensure service performance. For more information visit http://www.vordel.com
Well, maybe not 20 years ago, but in many ways there is nothing new about REST. For me, it goes back to a 1996 article by Jon Udell in Byte Magazine entitled "On-Line Componentware". You can read the article via Archive.org - at http://web.archive.org/web/*/http://www.byte.com/art/9611/sec9/art1.htm . The article described calling AltaVista using a Perl script. The print issue itself - the one with "Java Chips" on the cover - is sitting in a box in an attic in north Dublin, among the stuff I never got around to bringing over to Boston. Jon Udell's column talked about calling a URL programmatically by passing parameters in a Query-String, and then parsing the HTML results into Perl variables. Simple in retrospect, like a lot of good ideas, but it was revolutionary to read that every website is a software component and that "A powerful capability for ad hoc distributed computing arises naturally from the architecture of the Web". Websites become APIs. For me, this idea inspired the projects I implemented for the Irish government Revenue service (equivalent of the US IRS).
Fast forward to 2009, and Cloud Services are presented as APIs, with API Keys, called using REST patterns. It is reasonable to say "there is nothing new here". Look at how Bing exposes a Web API. It's very similar to how Jon Udell was calling AltaVista in 1996, except with the addition of the API Key in order to deter abuse.
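The pattern has barely changed since Udell's Perl script: build a URL with query-string parameters, add an API key, call it, parse the results. Here is a minimal sketch in Python - the host and parameter names are made up for illustration, not any real provider's API:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build a "website as API" request URL: parameters go in the query
# string, plus an API key to identify (and rate-limit) the caller.
# The endpoint and parameter names below are hypothetical.
def build_search_url(base, api_key, query, results=10):
    params = {"AppId": api_key, "Query": query, "Count": results}
    return base + "?" + urlencode(params)

url = build_search_url("http://api.example.com/search", "MY-APP-ID", "vordel gateway")
# The response (HTML in 1996, XML or JSON today) would then be parsed
# into program variables, just as Udell's script parsed AltaVista's HTML.
```

The only structural addition since 1996 is the `AppId` parameter - the API key that deters abuse.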
One new thing is that there is now SOAP as well as REST. But even there, it shouldn't be a case of "one or the other". If you want to bridge REST to SOAP-based Web Services, check out this post on converting REST to SOAP using the Vordel Gateway.
Tuesday, July 28, 2009
The abstract for Richard's talk explains that:
Service design principles, such as clean separation of concerns and loose coupling, should guide our service modeling. The need to apply these design principles is only heightened if you surrender control over location, implementation, or quality of service to an “externalized” source, such as a cloud or SaaS provider.
Ironically, it is difficult to provide a link to the description of Richard's talk on the Catalyst website because the site's use of a Flash-based schedule means that they "surrender control over location" of the individual URLs for the talks. So it's difficult to post a link to a Catalyst talk in a blog, or indeed Twitter. But, using Firebug, you can work around the Flash and get the URL for Richard's talk, which is:
Does this mean that the Flash interface on the Catalyst site, which masks the URL-addressable locations of the individual talks (making it tricky for bloggers or Twitterers to link to them), is a "folly"? :-)
Friday, July 24, 2009
Check it out by clicking on the image below:
Thursday, July 23, 2009
SOA as "architecture for the sake of architecture" may rightfully be dead, but many organizations are happily connecting systems together using services (remember, the second part of "SOA is dead" is "Long live services"). Check out our Google Map of Vordel customer case studies. Many of those customers are expressly not doing SOA, but all are very successfully connecting systems together, using service patterns.
So, with the free drink and the conversation about "SOA is dead", is the event tonight just like an Irish wake for SOA? Well, if it is, then the conversation may go like this:
(First drink consumed)
You know I always liked SOA. I wouldn't hear a bad word said against SOA.
(A few more drinks consumed)
You know what always annoyed me about SOA - all this high-falutin' talk for the sake of it. I always wished SOA would just focus on getting stuff done, not all this talk. To tell ye the truth, I always preferred the services over SOA.
(many more drinks consumed)
It moved! I'm sure I saw SOA move there! Maybe SOA is not dead after all!
Wednesday, July 22, 2009
Tuesday, July 21, 2009
Monday, July 20, 2009
I've seen the "API Key" pattern also crop up in SOAP messages before. It's reasonably common practice to use a WS-Security block as the way to encapsulate the API Key, rather than, say, coming up with a new custom token.
The problem which Scott ran into was how to create this password-less WS-Security block using WCF. This is something that's easy to do with the SOAPbox testing tool - here is how:
First load in your SOAP message (or generate one from WSDL). Next, in the "security" menu, choose "Insert WS-Security Username":
You'll notice that you have the option to include a password, or not include a password, by selecting the checkbox:
Here we've chosen not to include a password. When we look at the message in the design view in SOAPbox, we see the WS-Security token there, but no password:
The neat thing about SOAPbox is that the configuration carries over to the Vordel XML Gateway: the same screens which you use to configure SOAPbox are used to configure the Vordel Gateway. Let's say you want to deploy the Vordel Gateway to create the SOAP messages containing API Keys in password-less WS-Security UsernameToken blocks, as in Scott Hanselman's example. Basically you are converting from an HTTP GET (REST) to SOAP (I've written about how to do REST-to-SOAP conversion with the Vordel Gateway before) while taking the API Key from the HTTP headers and dynamically inserting it into a UsernameToken. In this way, you can support "traditional" browser clients sending API Keys, while also supporting the back-end Web Service which requires this non-traditional usage of a UsernameToken to encapsulate an API Key. This is shown below:
So, an XML Gateway is a way to support this type of "API Key in UsernameToken" case, while SOAPbox is a way to test it from the client-side perspective.
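For the curious, here is what that password-less token boils down to on the wire. This sketch builds a WS-Security header (standard OASIS WSS 1.0 secext namespace) with Python's ElementTree - purely illustrative; SOAPbox and the Gateway generate this for you:

```python
import xml.etree.ElementTree as ET

WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"

# Build a WS-Security header whose UsernameToken carries an API key
# in the Username element, with no Password element at all.
def username_token_header(api_key):
    security = ET.Element(f"{{{WSSE}}}Security")
    token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
    username = ET.SubElement(token, f"{{{WSSE}}}Username")
    username.text = api_key
    return security

header = username_token_header("my-api-key")
xml_text = ET.tostring(header, encoding="unicode")
```

The resulting header contains a UsernameToken with a Username child and nothing else - exactly the shape Scott was trying to coax out of WCF.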
Get more information on the Vordel Gateway here.
...and Grab your copy of SOAP box here.
Friday, July 17, 2009
The 53-slide presentation includes:
- Reference Architecture for XML Gateways and Endpoint Agents working together
- 5 Real-Life Case Studies
- Lessons Learned
Thursday, July 16, 2009
When you think about it, it's obvious why Pandora would not have an Open API. They make money from advertising. If you can mould your own app like a piece of modelling clay, incorporating a feed of music from Pandora but not Pandora's advertisements, then Pandora loses out on money.
A pity though. Imagine, for example, an app which would go through your local collection of MP3s and mark each one as a "thumbs up" on your Pandora profile. That would be useful. But there is no public API to build such a thing...
Wednesday, July 15, 2009
Albert Wenger (an investor in Twitter, as a matter of fact) suggests the use of two-factor authentication for Gmail (and other Cloud-hosted apps) as one solution (Cloud/Web App Security - A Modest Proposal). This addresses the authentication problem (whereby an attacker can attempt to guess a password to Gmail). But this has been difficult for banks to implement across the board, whether with RSA SecurID tokens or other one-time password systems. Users find them too awkward. And if that is a problem for online banking, what are the chances of two-factor authentication being adopted en masse for Gmail?
But let's look at the problem another way.
Consider the difference between stealing documents from someone's Gmail account (and associated Cloud services) and stealing the documents from corporate servers behind a firewall. And add to this the fact that the stolen documents are now being published by TechCrunch. On Robert X. Cringely's blog, an anonymous commenter mentions this distinction: "If the information had been stolen from Twitter's own servers, would TechCrunch be as quick to publish it?".
From an attacker's point of view, stealing the documents from the corporate network may involve getting physical access or VPN access. Stealing the same documents from a Cloud-based service like a Gmail account means guessing a password and then using that account to do "password recovery" to gain other passwords.
In the diagram above, which is easier: 1 or 2? Of course, option 2 is easier.
One option to fix this is Albert Wenger's modest proposal: Increase the strength of authentication to the Cloud Services. This is certainly one option.
But think about where that security perimeter is. In the "old world" where documents were on a server somewhere, the server was inside that security perimeter. If you put a perimeter around the Cloud environment, you still have the problem that the documents can easily travel outside that perimeter (after all, many are in the form of email attachments anyway - by definition they have already been sent elsewhere).
Think about how, logically, the perimeter should be shrunk right down to the documents themselves:
This means that the sensitive documents are encrypted (and signed for tamper-evidence). So how could this be implemented? One answer is to host the documents using a service such as Amazon's S3 and email URLs to the documents (which is how S3 works anyway - each document is represented as a URL-locatable resource) instead of passing around the documents themselves. Then use a Cloud Gateway to selectively encrypt and sign the documents being sent up for storage, so that they are stored encrypted and signed, and then, for authorized users who can perform strong authentication, decrypt the documents as they are retrieved. Albert Wenger's proposal of SMS-ing the decryption key to the reader can also be used (so an attacker may get the encrypted document, if they hack into the email account, but will not get the key).
In that way, the documents are not being stored in the clear. Google or Amazon, or any cloud provider, does not have to be trusted. An attacker could then guess a Gmail (or other hosted email provider) password, but the strong authentication would be required in order to access the documents themselves referenced in emails.
Much of the discussion about the Twitter hack has focussed on authentication. This is important. But consider a hospital with medical data (the example I used in my IBM DeveloperWorks series on Cloud Computing security). The requirement is that the data itself is secured. If the data is stored by Amazon or Google in the clear, but strong authentication is required to access it, then the data is still in the clear. That has privacy and compliance implications. I'm proposing that another way to look at it is to see the perimeter as being around the data itself, to secure that, and to not trust the Cloud provider.
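To make the "perimeter around the data" idea concrete, here is a stdlib-only sketch of the tamper-evidence half of the scheme: sign each document with a key that is held outside the cloud, and verify the signature on retrieval. A real deployment would also encrypt the payload (e.g. with AES, as the Cloud Gateway does); key management and the gateway integration are out of scope for this sketch:

```python
import hashlib
import hmac

# Seal a document before uploading: attach an HMAC tag so any
# tampering in storage (or in transit) is detectable on retrieval.
# This shows only the signing half of "encrypted and signed for
# tamper-evidence"; real deployments would also encrypt the payload.
def seal(key: bytes, document: bytes):
    tag = hmac.new(key, document, hashlib.sha256).hexdigest()
    return document, tag

def verify(key: bytes, document: bytes, tag: str) -> bool:
    expected = hmac.new(key, document, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# The key never goes to the cloud provider - that's the whole point.
key = b"shared-secret-held-outside-the-cloud"
doc, tag = seal(key, b"patient record 1234")
```

An attacker who guesses the webmail password gets the S3 URL and (in the full scheme) the ciphertext, but without the key they can neither read the document nor alter it undetected.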
More on Cloud Security here.
Tuesday, July 14, 2009
You can read all about Bing's API here. The Bing API is an updated version of Microsoft's Live Search API, which explains why Bing's API is already at Version 2.0, despite Bing itself being just a few weeks old.
So let's put Bing's API to the test using the SOAPbox Web Services testing tool...
Open up SOAPbox, and then select "Request Settings" under the drop arrow next to the address bar:
Now enter the details for connecting up to Bing. The URL must contain your App ID. To quote Microsoft:
Your AppID then goes into the SOAPbox "Request Settings" page as shown below:
Be sure to set the verb as "GET" (we're testing the REST API first - although SOAPbox has "SOAP" in its name, you can use it to test REST services).
Now press the green button to send the request up to Bing. We then see the search results in the results pane:
Now the fun begins. Click over from "Classic" mode to "Design" mode. In the bottom-left, click on the "Stress" tab. Configure a stress test like the one below. I've set it to send 100 searches up to Bing, using the same API key each time:
You can see from the results that all the requests went through fine. HTTP 200's all round. Bing's API had no problem with this kind of usage.
But, as Ralf Rottmann notes, you must "Restrict the usage to less than 7 queries per second (QPS) per IP address. Exceeding this limit must be approved with the team at email@example.com".
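If you are scripting against an API with a published QPS cap like this, a simple client-side throttle keeps you under the limit. The sketch below is generic Python, not SOAPbox or Bing-specific code; the clock and sleep functions are injectable so the behaviour can be tested without real delays:

```python
import time

class Throttle:
    """Allow at most max_per_second calls per rolling one-second window."""

    def __init__(self, max_per_second, clock=time.monotonic, sleep=time.sleep):
        self.max_per_second = max_per_second
        self.clock = clock
        self.sleep = sleep
        self.stamps = []  # timestamps of calls in the current window

    def wait(self):
        now = self.clock()
        # Drop timestamps older than one second.
        self.stamps = [t for t in self.stamps if now - t < 1.0]
        if len(self.stamps) >= self.max_per_second:
            # Sleep until the oldest call ages out of the window.
            self.sleep(1.0 - (now - self.stamps[0]))
            now = self.clock()
            self.stamps = [t for t in self.stamps if now - t < 1.0]
        self.stamps.append(now)

# Usage (hypothetical send_request): call wait() before each request, e.g.
#   throttle = Throttle(7)
#   for query in queries:
#       throttle.wait()
#       send_request(query)
```

With `Throttle(7)`, the eighth request in any one-second window blocks until the window rolls over - staying within the "less than 7 QPS" guidance.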
I guess you could say "this is not really a 'user-facing application', it's an example of automating a request". But, I'm a user and I'm facing SOAPbox :-)
Grab your copy of SOAPbox here and happy testing of Bing (and other APIs too).
Monday, July 13, 2009
Here is an IBM DeveloperWorks article I wrote on how lessons from SOA Governance apply to Cloud governance:
Friday, July 10, 2009
- Load-test a Web Service while showing real-time test feedback
- Issue certificates (in fact act as a mini-PKI)
- Populate SAML Assertions
- Support JMS as well as HTTP/SSL
- Negotiate Kerberos as well as mutual SSL
- ...and dynamically insert attack vectors?
Grab it at:
Wednesday, July 8, 2009
The full story by Trudy Walsh here:
Tuesday, July 7, 2009
Schemas are indeed useful, but here are some downsides:
- The availability of a schema helps in a plaintext-guessing attack against encrypted data, since an attacker knows the structure of the unencrypted data, the names of elements and attributes, and maybe default values too.
- Applications which are coded to validate all incoming XML can be diverted to a malicious Schema using the SchemaLocation attribute. The malicious Schema could include very complex checks which would choke a parser. This behavior can be turned off on some platforms; for example, here is how it's done in .NET: http://msdn.microsoft.com/en-us/library/ms763691(VS.85).aspx . I should note that Gunnar's code is not vulnerable to this attack since he specifies the Schema in the "schemaFile" variable. But many applications are. It is a neat way to turn a security measure against itself.
- As Gunnar says, Schema validation is only as good as the Schema itself. Most Schemas are just about the data-types ("this is a string, this is a string, this is also a string", etc). That is not useful for security purposes.
- Schemas define what should be in an XML document, but are not useful for defining what should not be in an XML document. That is where threat scanning for attack signatures (e.g. SQL Injection) comes in.
- Schema validation does not apply to RPC-encoded SOAP messages (partly because type information is included in each element that appears within the SOAP message). Unfortunately, RPC-encoded SOAP still exists in the wild. However, here at Vordel we do the seemingly impossible in our XML Gateway: allowing RPC-encoded SOAP messages to be validated against Schemas. If you need to validate RPC-encoded SOAP messages, check out the Vordel XML Gateway.
- And the biggie: Performance. Without an XML Acceleration system such as VXA in place, Schema validation can add significant latency to message throughput. In fact, that is one of the big reasons why Schema Validation is often skipped in Java and .NET apps (a bad idea!).
I'm not discouraging Schema usage, just saying that some caveats have to be kept in mind.
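To illustrate the "what should not be in an XML document" point: a schema happily accepts a SQL injection string in any element typed as xs:string, so you need a separate threat-scanning pass over the parsed content. Here is a toy sketch - the two signatures below are illustrative only, and real threat-scanning engines (such as the one in the Vordel Gateway) maintain far larger signature sets:

```python
import re
import xml.etree.ElementTree as ET

# Toy attack signatures: a SQL injection fragment and a script tag.
# Real threat-scanning products use much larger, maintained signature sets.
SIGNATURES = [
    re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # e.g. ' OR 1=1
    re.compile(r"<\s*script", re.IGNORECASE),
]

def scan(xml_text):
    """Return the suspicious text values found in any element or attribute."""
    hits = []
    for elem in ET.fromstring(xml_text).iter():
        for value in [elem.text or ""] + list(elem.attrib.values()):
            if any(sig.search(value) for sig in SIGNATURES):
                hits.append(value)
    return hits

# Schema-valid (customer is just a string) but clearly malicious content:
msg = "<order><customer>alice' OR 1=1 --</customer><qty>2</qty></order>"
```

Schema validation says this message is fine; signature scanning flags the customer field. The two checks are complementary, not alternatives.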
Monday, July 6, 2009
"On the question of interoperability, APIs will be the answer"
APIs are indeed how Cloud Computing platforms can link together. But it would be a mistake to think that APIs imply that programming is also needed. Infrastructural products like Vordel's Gateway Cloud Edition allow applications to link to Cloud services, and Cloud services to link to other Cloud services, using APIs but without coding. APIs are one part of the solution, for sure, but the ability to monitor Cloud usage, to apply policies to Cloud services, and to alert on outages is also vital. These capabilities are not delivered by APIs alone; they are the mainstay of Gateway products, which have been providing this kind of API-based integration for many years now.
Here is a screenshot of the configuration for linking a local Web Service to an Amazon SQS service using Vordel's Gateway. It uses APIs under the hood, but the configuration does not involve coding, and is just drag-and-drop. The policy governance, monitoring, and visibility are all provided as standard:
Friday, July 3, 2009
Phil Schacter, Vice President and Service Director with Burton Group said "The cloud services market is immature with few standards on how customers establish and control access by their users, and how providers protect information and report activity back to the customer. The concept of an enterprise gateway that connects to all internal and public cloud services accessed by various departments and users is an important innovation that allows a focal point for enforcing policy and auditing usage of services, and that is independent and transparent to specific cloud providers."
This is the key - an enterprise gateway being the focal point for enforcing policy and auditing usage of services (or "container" in the terminology proposed by Gunnar Peterson). And it is another kind of point too: the pivot point between local applications and Cloud services, and between the Cloud services themselves.