Friday, August 29, 2008
These measures would have helped in the many high-profile data loss cases in the UK, such as the loss of all prisoner data this week. If access to the prisoner data had been provided via a managed and controlled Web Service, then the pulling down of the entire prisoner dataset would have been blocked. Additionally, personal information could have been selectively encrypted prior to being passed back to the requester. An XML Gateway allows you to configure a rule that says "if you see a fragment of information that looks like a personal record, encrypt it or strip it out".
As it stands, presumably the information is obtainable via a SQL query to a database, and access to it is a matter of topology, not policy (i.e. if you are on the right network, you get access to the data). It should be a matter of policy, not topology: even if you get onto the most sensitive parts of the government network, your access to the information is still controlled by policy.
I remember talking to an architect from another part of the UK government who said that they did not need data-level security because "the network is secure" (the old security paradigm of the "hard crunchy shell and the soft chewy center"). But, that UK government department has since also suffered from its own well-publicized incidents of data loss. Data-level security would have been a safeguard against this.
Thursday, August 28, 2008
I spoke at the RSA Europe conference in 2001 which was run as a "Virtual Conference" because people did not want to fly after 9/11. That was run using Web Conferencing technology (LiveMeeting if I recall). Nowadays, Second Life style environments should make a virtual conference a lot more compelling. So, I'll be "there" at the Infoworld conference.
My question is "Will Infoworld run a virtual conference about virtualization?". If that happens, the universe may fold in on itself and implode.
Wednesday, August 27, 2008
Here is an example of a SchemaLocation directive in an XML message in SOAPbox:
Contrary to the common assumption, dereferencing the SchemaLocation on an untrusted message is a bad, bad idea. Think of the situation where an attacker can point the XML parser to a bogus schema, a schema designed to crash the parser, or to a script which serves up an endless stream of bytes.
For this reason, Vordel's XML Gateway contains a Schema Cache (highlighted in the screenshot below). This is a trusted store of Schemas. The Schemas can come from a repository, or from WSDLs which have been imported (or both). But, the key point is that SchemaLocation directives are not being naively trusted.
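The defense can be sketched in a few lines. This is a minimal illustration, not Vordel's implementation: attacker-supplied `xsi:schemaLocation` hints are stripped before parsing, and the schema to validate against is looked up in a local trusted cache (the namespace URI and file paths below are hypothetical).

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

# Trusted schema cache: namespace URI -> local schema path.
# Both entries here are hypothetical, purely for illustration.
TRUSTED_SCHEMAS = {
    "http://example.com/orders": "/opt/schemas/orders.xsd",
}

def strip_schema_hints(xml_bytes: bytes) -> ET.Element:
    """Parse an untrusted message and drop any attacker-supplied
    xsi:schemaLocation hints, rather than dereferencing them."""
    root = ET.fromstring(xml_bytes)
    for el in root.iter():
        el.attrib.pop("{%s}schemaLocation" % XSI, None)
        el.attrib.pop("{%s}noNamespaceSchemaLocation" % XSI, None)
    return root

def schema_for(root: ET.Element):
    """Look up the schema in the trusted cache; None means 'reject or
    skip validation', never 'fetch whatever the message points at'."""
    ns = root.tag[1:].split("}")[0] if root.tag.startswith("{") else None
    return TRUSTED_SCHEMAS.get(ns)

msg = (b'<o:Order xmlns:o="http://example.com/orders" '
       b'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
       b'xsi:schemaLocation="http://evil.example/endless.xsd"/>')
root = strip_schema_hints(msg)
```

The key property is that the message itself never influences which schema is fetched.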
Sending bogus SchemaLocation directives is just one technique employed when doing a vulnerability assessment of a Web Service. I have described others in my presentation at RSA 2008 back in April on Web Services vulnerability assessment.
Tuesday, August 26, 2008
What the "screen-scraper" websites are doing is using the Ryanair website as a Web Service. "Screen-scraper" was the old phrase, conjuring up an image of a green screen mainframe. But, bang-up-to-date, this story is a nice example of "WOA" (Web Oriented Architecture) where the Ryanair Website unwittingly becomes part of the "Global SOA", to be used by applications.
Is "WOA" really new? I urge everyone to read this Byte article from Jon Udell in 1996,
The fact that the Ryanair site is being used as a software component, even though Ryanair expressly do not want it to be used in this way, shows the power of WOA. You literally can't hold it back.
Or can you? In Irish law, there is a precedent for this. The full details are in this story by Eoin Licken in the Irish Times Archive from 1998. I've pasted snippets below:
Irish companies putting information on Websites should stipulate terms and conditions for how their sites are used, following the dismissal of the State's first prosecution for unauthorised accessing of data earlier this year.
The case also highlights the dilemma faced by online information providers: how to limit access to valuable information in a medium designed for fast information transfer. Last April, Mr Alister Kidd, managing director of Touchtel, became the first person to be charged with unauthorised access to data under the 1991 Criminal Damage Act. The prosecution followed a complaint by Kompass Ireland, which runs an online database of company information, that Mr Kidd had found a way to bypass its site's technical restrictions and download company information from the database more quickly. Kompass says Mr Kidd wrote a computer program to automatically download records of company information every five seconds, a technique it calls "harvesting". He was traced via the address of the computer he used to download the records.
Legal sources say they were surprised the case arose at all, and the major lesson from it is the need for terms and conditions on Websites. Mr Kidd says the lesson is: "If you've got a site, specify what the usage is for."
However, not everyone is satisfied with the need for explicit terms and conditions on Websites. Mr Alex French of Medianet, Touchtel's Internet service provider at the time, says the need for disclaimers to prevent unauthorised access is "akin to requiring shops to put a `You may not break into this shop' sign up at night". He says the case has a profound impact for the Internet community in Ireland. The inspector in charge of the Garda Computer Crimes Unit says the issues surrounding access to data are still not clear. "If you are prepared to put information in the public arena you're inviting public access," says Insp Eugene Gallagher, but he adds: "It's unclear if someone comes in the window instead of the door."
Monday, August 25, 2008
This is even more relevant for Web Services. "Policy" for Web Services incorporates not only Access Control (i.e. who can use which Web Service), but also reliability (where to raise an alert if a Web Service is not responding), and archival (where to log messages).
In the past, the access to an application depended on where you put it on the network. "If you put it here, then these people can access it. If you put it over on this subnet, then these other people can access it, and these other people are notified if it is unavailable". Policy Director changes this: it virtualizes the policy framework, so that when you deploy a Web Service anywhere on the network, Policy Director directs the policy down to the XML Gateway which controls that Web Service, wherever that may be. If an organization has a registry, we leverage that as part of this policy deployment. For this to work, policies have to be truly reusable across the enterprise, not bound up with resources (i.e. with the Services). This is something we enable in our framework.
There is a lot of talk about virtualization for this, and that. Virtualization of a policy framework is the logical step.
Friday, August 22, 2008
1) Performance. XML processing takes up significant CPU resources. So does cryptography. Together, XML Decryption creates a "perfect storm" of CPU usage.
2) Key generation and key management. It can be very tricky indeed to generate cryptographic keys and then to store them safely on hardware.
The good news is that XML Gateways address both issues. They are high-performance, and they typically include hardware for key storage. Vordel's XML Gateway also includes a simple tool for generating certificates and private keys.
We are going to setup a demo whereby a client encrypts part of an XML message using a public key, and then an XML Gateway decrypts the encrypted data using the corresponding private key. This is shown in the schematic below:
To set this up, the first thing you need to run this demo is a copy of the SOAPbox testing tool and a copy of the Vordel XML Gateway (grab an XML Gateway evaluation here). We are going to use the Policy Studio to generate the keys.
The high-level steps are:
Step 1) Create the public and private keys in Policy Studio, then export them.
Step 2) Import the keys into the SOAPbox
Step 3) Create the XML Decryption policy in Policy Studio
Step 4) Perform the Encryption in SOAPbox and send the encrypted message to the XML Gateway, where it is decrypted.
Let's get started...
Step 1 - Creating the certificate and private key.
For this demo, we are going to create a self-signed certificate. In a real deployment scenario, of course, you would use a certificate from a trusted CA such as VeriSign, or a corporate CA.
In Policy Studio, open the Certificates configuration by clicking on “Certificates” on the left-hand side:
Press “Edit” beside the “Subject” and enter details. You only need to enter the Common Name and Company Name. Then press “Sign Certificate” and choose “Self-Sign” in order to create the private key also. You can choose “Use Distinguished Name” for the name of the certificate, used to identify it later.
Now press on “Export Certificate and Key”. Choose the “PEM” format. Enter a password, and remember the password because you will need it later.
2) Importing Certificate into SOAPbox
In SOAPbox choose “Security” from the menu, then “View Certificates”.
Press on “Create” and then “Import certificate+key”. Load in the certificate file (“PEM” file) which you created in the previous step (note: strictly speaking we only need the public key for this demo, as we're only doing XML Encryption on the client. If you want to follow on and do XML Signature on the client, you'll need the private key. Note also that if you use an XML Gateway appliance, then key exporting is highly controlled).
3) Configuring the XML Decryption policy in Policy Studio
Back in Policy Studio, right-click on the policies and choose “Add Policy”.
Give the policy a name:
Enter the settings “Decrypt all” and “Find via KeyInfo in message”, as shown below:
Now, right-click on the “XML Decryption Settings” filter and choose “Set as start”.
Next, put an XML Decryption filter under it. This is also to be found under the “Encryption” group.
We want to echo the decrypted message back to the client (i.e. back to SOAPbox). So we now add a “Reflect” filter at the end of our policy. This is to be found under the “Utility” group. The policy now looks like this:
Now, at the “XML Gateway” level, right-click and choose “Add Relative Path”, as shown below:
Create a path called “/decryption” and map it to the policy you just created.
Be sure to press F5 to push the updated policies out to the XML Gateway. If you are using Policy Director then you must have deploy privileges for that XML Gateway.
4) Seeing XML Decryption in action with SOAPbox
Finally we perform the Encryption in SOAPbox and send the encrypted message to the XML Gateway, where it is decrypted.
In SOAPbox, using the “Classic Mode”:
In the SOAPbox screenshot below, you can see the encrypted data in the left-hand (outbound) side. Press “Send Request”. We now see the message being sent to the decryption Web Service. In the response, on the right-hand-side, the data is decrypted. That is all there is to setting up XML Decryption using SOAPbox, the Policy Studio, and the Vordel XML Gateway.
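The overall round trip can be sketched in code. This is only a structural illustration of what XML Encryption does to a message: an element's content is replaced by an `xenc:EncryptedData` block with a base64 `CipherValue`, and the gateway reverses it. The toy keystream cipher below stands in for the real cryptography (XML Encryption actually uses a symmetric cipher like AES, typically with the symmetric key wrapped under the recipient's RSA public key), so treat it as shape, not security.

```python
import base64, hashlib
import xml.etree.ElementTree as ET

XENC = "http://www.w3.org/2001/04/xmlenc#"

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from repeated hashing. Stands in for AES purely
    # for illustration -- NOT real cryptography.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt_element(el: ET.Element, key: bytes) -> None:
    """Replace an element's text with an xenc:EncryptedData child,
    mirroring the structure the client-side encryption produces."""
    plaintext = (el.text or "").encode()
    cipher = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    el.text = None
    enc = ET.SubElement(el, "{%s}EncryptedData" % XENC)
    cv = ET.SubElement(enc, "{%s}CipherValue" % XENC)
    cv.text = base64.b64encode(cipher).decode()

def decrypt_element(el: ET.Element, key: bytes) -> str:
    """What the gateway does on receipt: find the EncryptedData,
    recover the plaintext, and restore the element."""
    cv = el.find(".//{%s}CipherValue" % XENC)
    cipher = base64.b64decode(cv.text)
    plaintext = bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))
    el.remove(el.find("{%s}EncryptedData" % XENC))
    el.text = plaintext.decode()
    return el.text

key = b"demo-shared-secret"
doc = ET.fromstring("<Order><CardNumber>4111111111111111</CardNumber></Order>")
card = doc.find("CardNumber")
encrypt_element(card, key)
decrypt_element(card, key)
```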
The answer is: The response back from the Web Service.
It is easy to overlook the fact that an XML Gateway processes not only input to the Web Service, but the response from the Web Service also. Indeed, many deployment plans for XML Gateways ignore the processing of the response altogether. The thinking is "The input is coming from an untrusted client, so we have to process that. But, the response comes from our own systems so it is under our control, so we should just pass it unchanged back to the client".
So what should the XML Gateway do with the response?
1. Sign Responses for non-repudiation
It is often a good idea to sign messages which come back from a Web Service. This adds non-repudiation so that a client cannot claim to have gotten a different response from a Web Service.
Let's look at how this is configured in the Vordel Policy Studio [this is our design-time policy creation and editing tool. If you don't have a copy, grab an evaluation]. In the Vordel Policy Studio, it is a simple matter to add response processing, by adding filters which follow routing filters (and therefore act on the response back from the Web Service). In the following screenshot we see a policy which is performing WS-Security UsernameToken authentication, input validation, XML Enrichment, then routing to a Web Service on WebLogic.
We can drag in a "Sign Message" filter (from the "Integrity" group on the right) and chain it after the filter which routes to the WebLogic Web Service. This means that the response back from WebLogic is signed by the XML Gateway before it is returned to the client.
Signing at the XML Gateway has a number of advantages over signing the response at the application server. For a start, the XML Gateway will use a key which is stored in hardware, rather than being stored in a software keystore on the file system of an application server. Also, in the case of Vordel's XML Gateways, it uses a patented XML acceleration subsystem (VXA) which results in significantly faster XML Signature processing than at an application server.
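The pattern reduces to: as the response passes back through, attach a signature over the body that the client can verify. A minimal sketch, using HMAC-SHA256 as a stand-in for WS-Security XML Signature (and a key held as a plain string, whereas an appliance would hold it in hardware):

```python
import hashlib, hmac

# Illustrative only: on an appliance the signing key lives in hardware,
# not in application code.
GATEWAY_KEY = b"hsm-protected-signing-key"

def sign_response(body: bytes) -> dict:
    """Attach a detached signature to the response on its way back to
    the client. HMAC-SHA256 stands in for XML Signature here."""
    return {
        "body": body,
        "signature": hmac.new(GATEWAY_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify_response(msg: dict) -> bool:
    """The client-side check: any change to the body breaks the signature."""
    expected = hmac.new(GATEWAY_KEY, msg["body"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])

signed = sign_response(b"<Envelope><Body>result=42</Body></Envelope>")
```

Real non-repudiation requires an asymmetric signature (so the verifier cannot forge it), which is exactly what the gateway's private key provides.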
"De-identification" means removing any identifying information from documents. For example, in healthcare, de-identification means removing patient information from a document such as a medical test result.
An XML Gateway is ideally positioned to perform de-identification, and by using XML Acceleration techniques it can perform this task without adding latency.
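A minimal sketch of de-identification on an XML response. The element names treated as "identifying" are hypothetical; a real deployment would drive this list from policy rather than a hard-coded set, and might encrypt rather than remove:

```python
import xml.etree.ElementTree as ET

# Hypothetical list of identifying fields, purely for illustration.
IDENTIFYING = {"PatientName", "DateOfBirth", "SSN"}

def deidentify(xml_text: str) -> str:
    """Strip identifying fields from a result document as it passes
    through the gateway, leaving the clinical payload intact."""
    root = ET.fromstring(xml_text)
    for parent in root.iter():
        # Copy the child list so removal is safe during iteration.
        for child in list(parent):
            if child.tag in IDENTIFYING:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

result = deidentify(
    "<TestResult><PatientName>J. Bloggs</PatientName>"
    "<SSN>000-00-0000</SSN><Cholesterol>5.2</Cholesterol></TestResult>")
```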
3. Data loss prevention
This means scanning the responses from Web Services to ensure that no information is leaking out which should not leak out. In the Vordel XML Gateway, you can define structures of confidential information and, if these structures are detected in the responses from Web Services, then the confidential information can be selectively encrypted, or simply stripped out of the messages.
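One such "structure of confidential information" is a card number. A sketch of the idea, assuming a simple rule: redact any card-length digit run in the outbound response that passes a Luhn check (a real DLP policy would cover many more patterns, and could encrypt instead of stripping):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum, which valid card numbers satisfy."""
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

CARD_RE = re.compile(r"\b\d{13,16}\b")

def redact_cards(response: str) -> str:
    """Scan an outbound response for card-number-shaped digit runs and
    strip the ones that pass the Luhn check; other digit runs (order
    references, etc.) pass through untouched."""
    def repl(m):
        return "****REDACTED****" if luhn_ok(m.group()) else m.group()
    return CARD_RE.sub(repl, response)

out = redact_cards(
    "<Resp><Card>4111111111111111</Card><Ref>1234567890123</Ref></Resp>")
```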
4. Sanity Checking
If a Web Service normally returns 5K of response data, but the XML Gateway detects that it has output a response of 50K to a particular query, then this raises a red flag. It could be that the request has caused the Web Service to malfunction, compromising its functionality.
In addition, if a Web Service returns back a stack trace, or details of exceptions, then this should be sanitized by the XML Gateway rather than being passed back as-is to the client.
Schema validation of the response is also a good idea (but remember that Schemas alone are not useful for detecting harmful content), and runs significantly faster on an XML Gateway than at an application server.
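The size sanity check is simple enough to sketch in a few lines. The baseline would come from observed traffic; it is hard-coded here for illustration:

```python
def size_anomaly(response_bytes: int, baseline_bytes: int, factor: int = 5) -> bool:
    """Flag a response wildly larger than the service's normal output --
    e.g. 50K from a service that usually returns 5K. The factor-of-5
    threshold is an arbitrary illustrative choice."""
    return response_bytes > baseline_bytes * factor

alert = size_anomaly(response_bytes=50_000, baseline_bytes=5_000)
```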
And last but not least:
Input messages are typically logged by an XML Gateway, and the same should go for responses. An XML Gateway can sign the logged messages as it logs them, in order to keep an evidential audit trail. It is important that XML Gateways contain hard drives so that they can perform on-board logging, rather than forcing traffic to be sent out to a remote logging destination (adding extra latency per message).
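A sketch of what "signing messages as they are logged" buys you: if each entry's MAC also covers the previous entry's MAC, the log becomes a chain in which after-the-fact tampering or deletion is detectable. This is an illustrative scheme, not Vordel's implementation, and the key would live in hardware rather than in code:

```python
import hashlib, hmac

LOG_KEY = b"audit-log-signing-key"   # illustrative; keep in hardware in practice

def append_entry(log: list, message: bytes) -> None:
    """Append a message to the audit log, chaining each entry's MAC
    over the previous one."""
    prev = log[-1]["mac"] if log else ""
    mac = hmac.new(LOG_KEY, prev.encode() + message, hashlib.sha256).hexdigest()
    log.append({"message": message, "mac": mac})

def verify_log(log: list) -> bool:
    """Re-walk the chain; any edited or removed entry breaks every
    MAC from that point onward."""
    prev = ""
    for entry in log:
        expected = hmac.new(LOG_KEY, prev.encode() + entry["message"],
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
append_entry(log, b"<request id='1'/>")
append_entry(log, b"<response id='1'/>")
```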
Putting it all together, we have:
i.e. both the input and the response are processed by the XML Gateway
Thursday, August 21, 2008
- Loose Coupling
Interleaving the role of the Business Analyst with the Enterprise Architect would seem to make sense, as Joe McKendrick says: "with SOA increasingly taken on a business or enterprise hue, BAs may now have a lot to contribute to these efforts".
A key reason is to bring in business sensibility so that architecture-for-the-sake-of-architecture is not built out. Certain SOA components (such as XML Processing Offload) save costs, while others can actually increase costs. If this is changed, then the word "profit" (or cost savings at least) can really replace the 7 dirty words.
Tuesday, August 19, 2008
Our latest Vordel Screencast describes how you can protect a Rich Internet Application (RIA) with our XML Gateways, using:
- Content Threat Detection
- Certificate Authentication
- Selective Rate Limiting
The last one, Selective Rate Limiting, is important because it allows you to set up the "Freemium" scenario, whereby users can use the service up to a certain limit, but once they reach that limit they need a token (an X.509 Certificate in this case) to use it beyond that limit. Generally, the token indicates that the client has paid a premium (hence the name "freemium": free up to a point, then you pay a premium).
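The freemium logic boils down to a per-client counter plus a token check. A minimal sketch, in which "token" is simplified to a string standing in for a validated certificate fingerprint, and the quota is per-run rather than per-day:

```python
class FreemiumLimiter:
    """Free callers get a fixed quota; callers presenting a valid
    premium token (the X.509 certificate in the screencast) bypass it.
    Token handling and counter lifetime are simplified for illustration."""

    def __init__(self, free_quota: int, premium_tokens: set):
        self.free_quota = free_quota
        self.premium_tokens = premium_tokens
        self.counts = {}

    def allow(self, client_id: str, token: str = None) -> bool:
        if token in self.premium_tokens:
            return True                      # premium: quota check bypassed
        used = self.counts.get(client_id, 0)
        if used >= self.free_quota:
            return False                     # free tier exhausted
        self.counts[client_id] = used + 1
        return True

limiter = FreemiumLimiter(free_quota=3, premium_tokens={"cert-fingerprint-abc"})
free_results = [limiter.allow("alice") for _ in range(5)]
```

In practice a premium client would get a higher limit rather than no limit, but the shape is the same.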
Want to know more about RIA's? Here are some links:
- RIA's and the SOA Factor
- Better RIA's from the BT Ribbit acquisition
- ISec Partners: Living in the RIA World
Monday, August 18, 2008
I see a parallel with PKI ten years ago. Initially, the large vendors would tout PKI as a "top-down" architecture, a case of "build it and the applications will come". Some large organizations put PKIs in place, and then thought "hmm now what do we do with this?". Others were not so rash, and started out with the applications (SSL was the "killer app" for PKI) and then decided if they needed a PKI or not [many didn't, and used VeriSign or Thawte instead].
Simple XML-based integration, and simple Web 2.0 style mash-up applications, are arguably the killer apps for SOA. Both are enabled by XML Gateways, which apply the security and performance that is non-negotiable. But, these applications do not require a "full SOA", any more than organizations ten years ago needed a "full PKI" in order to run the B2C and B2B Websites that used SSL.
Sunday, August 17, 2008
It mentions a team which "wrestled with the tradeoffs between REST- and SOAP-based services" before going with REST rather than SOAP:
"Railinc also dealt with technical decisions. Webb’s team wrestled with the tradeoffs between REST- and SOAP-based services, going with the former for greater simplicity. REST, short for Representational State Transfer, is an architectural style that doesn’t require header messages and other types of XML overhead. But it has no mechanism for establishing contracts between consumers and providers"
It doesn't have to be a tradeoff. With an XML Gateway, you can have both. You can present a SOAP interface to a REST service, or vice versa, doing the transformation on-the-fly at the XML Gateway. Through Service Virtualization, you can present a REST service in front of a SOAP service even if no "real" REST service exists (it's a "virtual service" exposed by the XML Gateway). This is one of the key features of an XML Gateway. Back on my older blog, here is an example of how REST and SOAP can co-exist, thanks to an XML Gateway.
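The core of that transformation is mechanical: map a REST-style GET onto the SOAP envelope the "real" service expects. A sketch under stated assumptions (the service namespace, and the path-to-operation naming convention, are both invented for illustration):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse, parse_qsl

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"   # hypothetical service namespace

def rest_to_soap(rest_url: str) -> str:
    """Turn a REST-style GET like /quote?symbol=VORD into a SOAP request --
    the kind of on-the-fly transformation a gateway performs when it
    exposes a virtual REST service in front of a SOAP one."""
    parsed = urlparse(rest_url)
    # Map the path to an operation name; this convention is an assumption.
    operation = parsed.path.strip("/").capitalize() + "Request"
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    op = ET.SubElement(body, "{%s}%s" % (SVC_NS, operation))
    # Each query parameter becomes a child element of the operation.
    for name, value in parse_qsl(parsed.query):
        ET.SubElement(op, "{%s}%s" % (SVC_NS, name)).text = value
    return ET.tostring(env, encoding="unicode")

soap = rest_to_soap("/quote?symbol=VORD")
```

The reverse direction (unwrapping a SOAP response into a plain XML or JSON payload) is the same idea run backwards.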
Friday, August 15, 2008
One striking fact is that APIs and services have come together. A couple of years ago, people may not have understood a Web/XML interface as an "API". But now a Web/XML interface is not only "just another example of an API"; for many newer products it is the main API which is provided.
For example, Vordel's products include a number of Web/XML interfaces of their own, for things like user provisioning. This is a kind of "eating our own dog food" because we use our own secure XML pipeline for those services. If people ask "do you have an API", we answer "yes, we have a Java API and a SOAP API". Different purposes, but both examples of APIs.
Thursday, August 14, 2008
Early bird registration ends this Friday, so click here to register.
Wednesday, August 13, 2008
Definitely any XML Security Appliance should support all the requirements which Igor Khurgin lists in the article. As Igor mentions, some XML Gateways do not support all types of SAML assertions (he notes that "Vendors also get picky about what SAML assertions they support (most support authentication and only a few support authorization and attribute)"). Vordel supports the consumption and generation of SAML Authentication Statements, SAML Attribute Statements, and SAML Authorization Statements (as well as supporting both sides of the SAML/XACML AuthorizationDecisionQuery message exchange).
The article is a great introduction, covering the base requirements. Most organizations will also need to add some specific requirements of their own. Nobody wants to buy an XML Security Gateway and then think "hang on, how does this thing work with our deployment of SiteMinder?", or hand it over to operations staff who ask "has it got an SNMP MIB I can load into OpenView?" and think "hmm I never asked the vendor that".
Monday, August 11, 2008
Friday, August 8, 2008
Service Virtualization is a specific SOA example of "virtualization". Service Virtualization is different from "Server Virtualization" [ Lori MacVittie contrasted the two different meanings of Server Virtualization in a post yesterday on F5 DevCentral ]
[ By the way, Jon Udell originally popularized screencasts, and they are an excellent way of "using technology to explain technology" (to re-use Jon's recent post title)]
Thursday, August 7, 2008
This report has implications for XML Gateways. Much of the reaction contrasts proprietary products with open source products, as if they are completely separate. But, sometimes seemingly proprietary products include open source products. For example, it must be tempting to construct an XML Gateway by putting together a combination of Tomcat, MySQL, and other open source products onto an appliance platform, with some proprietary code as the higher-level "glue". In fact, this is how one of our competitor products is constructed. The dangers here are both responsibility and increased attack surface. You can address problems in your own code, but what if there is a problem found in MySQL? As a customer, you may ask the vendor about the security of their own coding, but what about the other products which run on their appliance? Do customers even know those products are there? Does any network admin want to wake up in the morning, look at Google News, see reports of a vulnerability in Tomcat and think "Isn't that what our XML Gateway runs on?". All of that is a potential nightmare. We avoid that for our customers by providing them with a product which we fully control.
Wednesday, August 6, 2008
Other options have included the password bookmarklet.
But, I suspect many Firefox users now use the "Do you want Firefox to remember this password?" feature. Here it is in use for logging into AA.com (notice the top bar):
But did you know that if you use this feature, then when you view the site options for a site (by right-clicking anywhere on the site then choosing "View Info" from the context menu), Firefox will show your saved usernames and passwords in the clear. This is shown below:
Bloggers such as Elliott at Carson Systems have pointed out that you can also get to the in-the-clear Firefox passwords through the Options/Preferences menu item. The solution, as has been pointed out, is to configure a master password in Firefox.
It is certainly a problem that users aren't even aware that the passwords are being stored in the clear locally by the browser, so that any passing person can view them with a couple of clicks of the mouse. Also, I doubt if users are aware of the implications of the different password management options which are represented on the American Airlines login screen above:
Option 1: Firefox "remember my password" will allow others to easily see your password, unless you set a master password. Few users will ever know to set this master password.
Option 2: Using AA's "remember my username" feature will store a pointer to the username on your machine using a cookie (i.e. the actual username is not present in the cookie, it's a pointer to a username stored at AA.com). No password is stored locally.
Option 3: "Email me my password". This sends a temporary password to the email address associated with your username, and you must then choose a new password.
Do users know the security differences between the three options above? I suspect not, since usability is the key factor in the choice.
When Firefox pops up that bar (in the top image above) with the "Remember" button, it should also show a "Manage Passwords" option too.
[incidentally, AA.com is down now, which makes writing this post more difficult. They seem to be having some serious issues at the moment]
Tuesday, August 5, 2008
The standard Use Case used in demos of the Netstructure XML Director box was "Here is a Purchase Order in XML. If the total value of the Purchase Order is above $X then we route it to this location. If it is less than $X we route it somewhere else". Although the Intel NetStructure XML Director was discontinued, that use case continues to come up again and again in demos.
This demo often used XPath Quantifiers to total up the Purchase Order at the XML Gateway. This allowed some of the more fancy aspects of XPath to be demo-ed. By "fancy" I mean aspects where you would imagine you'd need a DOM, and therefore you can show a nice speed difference between an appliance and some DOM-using open source software. If you used a simpler example, then a viewer may ask "could that not be optimized in software by avoiding DOM processing?".
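A minimal sketch of that demo. A full XPath 1.0 engine (of the kind an XML Gateway embeds) totals the order in a single expression, `sum(/PurchaseOrder/Item/Price)`; Python's `xml.etree` only supports an XPath subset with no `sum()`, so here the selection is XPath and the totaling is done in Python. The document shape, threshold, and route names are all invented for illustration:

```python
import xml.etree.ElementTree as ET

po = ET.fromstring("""
<PurchaseOrder>
  <Item><Name>Router</Name><Price>800</Price></Item>
  <Item><Name>Cable</Name><Price>15</Price></Item>
  <Item><Name>Switch</Name><Price>1200</Price></Item>
</PurchaseOrder>""")

# A full XPath engine would evaluate sum(/PurchaseOrder/Item/Price)
# directly; ElementTree's subset means we select, then total in Python.
total = sum(float(p.text) for p in po.findall("./Item/Price"))

THRESHOLD = 1000.0   # the "$X" of the demo
route = "approvals-queue" if total > THRESHOLD else "fast-path"
```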
However, whatever the motivations behind using this use case, the fact is that it is not often used in real life. I think that people assume that is what XPath processing on an XML Gateway is for. However, that is not the case.
Firstly, if you're making decisions based on Purchase Order value at the XML Gateway, then you are putting business logic out there. This is not necessarily a bad thing [since it will run faster and more securely on an XML Gateway than on an app server].
However, the most common uses of XPath are:
1) As an underlying technology of WS-Addressing, WS-Security, et al. It is used to selectively pick out the data (e.g. routing locations) which is used for those higher-level standards.
2) To target XML data for encryption. For example, in medical "de-identification" uses of Vordel XML Gateways, we are selectively encrypting "identifying" patient data which we find inside XML messages on the network and at the Web Service endpoint. XPath is one of the ways to find this identifying information.
3) To target XML data for signing. Here XPath is used to narrow down exactly what part of the message must be signed.
4) In the course of XML Enrichment [the name given to the practice of looking up contextual information which is then embedded into the XML message. For example, one of our mobile telecoms customers uses our XML Gateways to look up subscriber information in databases and directories and then to insert it into XML messages on the fly on the network, using XPath to signify the location. Then, the task of looking up this information is offloaded from the application server. It is an example of XML Offload.]
The lesson is that a device doing XPath alone doesn't make a whole lot of sense, even if it does the more fancy aspects of XPath such as quantifiers. We use highly optimized XPath a lot, but as an underlying technology for many higher-level technologies.
Monday, August 4, 2008
Sunday, August 3, 2008
Here is the original posting:
Monday, January 10, 2005
While TCP-tracing some SOAP messages recently, I noticed a bunch of connections originating from my laptop to computers around the world. A quick examination of my personal firewall logs showed that these were being initiated by Skype. I noticed that Skype was connecting out to personal computers in Spain and the Netherlands.
If you're running Skype, run the netstat command from the command-line and see for yourself. To understand what these connections are, you have to look into how Skype works. Skype has a "For Geeks Only" page which hints at how it works, but I think the real geeks will want to look at this excellent presentation: http://mnet.cs.nthu.edu.tw/paper/Chance/041125.pdf
My laptop is behind a NAT firewall which issues private non-routable IP addresses. This means that outsiders cannot initiate TCP connections to me. It happens that all the people with whom I communicate using Skype are also behind NAT firewalls. My copy of Skype can't open a port to them, and they can't open a port to me. In a centralized system, we would both connect to a central "Skype Server" (probably in Luxembourg, where Skype is headquartered). But, Skype is a decentralized P2P system, with no central servers. So, my laptop opens connections to a small number of other Skype users who have public IP addresses. These are known as "super nodes" in the Skype network. As the presentation (linked above) puts it, "Any node with a public IP address having sufficient CPU, memory and bandwidth is a candidate to become a super node". Right now, I am connected to a "super node" computer in the Netherlands. If I initiate a call to a Skype contact, our call is routed through the same "super node", using their bandwidth. But their CPU is not necessarily used for the voice encoding, instead Skype holds an "election" and the fastest CPU of the three (two Skype clients and one "super node") gets the job of doing the encoding.
So, in Skype's network, computers with public IP addresses carry the weight. Additionally, a user who allows incoming TCP connections will experience better call quality, because it cuts out the middleman of a "super node". Skype's FAQ says as much, "
In the quest for even better voice quality, it is also advisable to open up incoming TCP and/or UDP to the specific port you see in Skype Options. This port is chosen randomly when you install Skype. In the case of firewalls, this should be easy to arrange. In some routers, however, you cannot configure incoming UDP at all (but you still can configure incoming TCP port forwarding, which you could/should do). " http://www.skype.com/help/faq/technical.html
This is a carefully worded paragraph. The "quest for even better voice quality" is as much for the Skype service as a whole, as for the individual user. In a clever reverse of the "Selfish Gene" thesis, a selfish motive winds up achieving an altruistic end. In fact, the use of the word "should", regarding configuring TCP port forwarding, suggests an altruistic motive - you "should" pull your weight and become a super-node. If we were all behind NAT firewalls, the whole Skype system wouldn't work (nor would Kazaa, with which Skype shares architecture).
Friday, August 1, 2008
A commenter on soc.culture.irish remarks that Cuil means: ‘eagerness, fearsomeness, a gnat, a horsefly, a beetle, a bluebottle, and (with the addition of a fada) a rear end, a reserve or backup, a corner, and an arse. The one thing it isn’t, according to the four dictionaries I just checked, is knowledge.’
Initially when I heard about Cuil's Irish-influenced nomenclature, I thought they were going after "cúl", meaning "goal", as in Gaelic Football or soccer. And sure enough on Monkeyfilter there is a comment about "cúl" for "goal" in football, with an adjectival form cúil" [as well as meaning rear-end in both French and Irish].
But, it seems the influence for the name came from the old Irish legends of Fionn McCumhaill (surname pronounced "Mac Cool"), who gained knowledge from burning his finger while cooking a hazelnut-eating salmon.
In any case, Vordel is on Page 1 of Cuil's search results for XML Appliances so that's good :-)
The article explains how to check an XML Signature which has been generated on the client side by Microsoft .NET.
What is good about the article is what it doesn't say. It does say that you can use XML Signature verification in order to test the integrity of incoming XML messages (i.e. check that they haven't been changed) and also to check that the signer was trusted. It does not say that you can use XML Signature checking to authenticate the client. Remember yesterday's post on replay attacks in Web Services. If you are only checking the XML Signature of incoming messages and deciding "if it's signed by a trusted client, we let it in", then an attacker could get a hold of a signed message and replay it. That is why, if you want to use XML Signature as part of an authentication system, you must include something which changes per message (like a timestamp or nonce) and then sign that also. You can't just sign the SOAP Body if you want to use XML Signature in an authentication context. This is well known and is in the WS-I Basic Security Profile document. But still, it's common to run into situations where a naive developer may think "I will authenticate clients to my Web Service by validating an XML Signature over the SOAP Body" [i.e. "I will perform some complex CPU intensive processing on every single incoming message, before I know if I should trust it or not"].
If you also include a Timestamp and sign it (as the Microsoft .NET WSE and WCF toolkits do), then at the XML Gateway side you can use a Filter, such as the filter shown below, to validate the Timestamp. This is shown in the Policy Studio for making policies to govern Web Services usage.
So remember, XML Signature used on its own is for integrity checking. If you want to use it for client authentication, you need to consider replay attacks and you need to implement a policy in line with the WS-I BSP guidelines (including a nonce or timestamp, signing these, etc).
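The essential point is that the signature must *cover* the timestamp, and the verifier must check freshness. A minimal sketch using HMAC as a stand-in for XML Signature (WS-Security would sign with an X.509-backed key; the shared key and message format here are purely illustrative):

```python
import hashlib, hmac, time

KEY = b"client-signing-key"   # illustrative shared key

def sign_with_timestamp(body: str, created: float) -> dict:
    """Sign body *and* timestamp together, as the .NET WSE/WCF toolkits
    do, so a captured message cannot simply be replayed later."""
    payload = ("%f|%s" % (created, body)).encode()
    return {"body": body, "created": created,
            "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def gateway_accepts(msg: dict, max_age_secs: float = 60.0, now: float = None) -> bool:
    """The gateway-side check: the signature must verify over the
    timestamp + body, and the timestamp must be fresh."""
    now = time.time() if now is None else now
    payload = ("%f|%s" % (msg["created"], msg["body"])).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["sig"]):
        return False
    return (now - msg["created"]) <= max_age_secs

msg = sign_with_timestamp("<Body>transfer</Body>", created=1000.0)
fresh = gateway_accepts(msg, now=1030.0)
stale = gateway_accepts(msg, now=5000.0)   # the same message, replayed later
```

Note that an attacker cannot simply update the timestamp on a captured message, because that would break the signature.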
They don't go into specifics, but it's long been well known that one vulnerability relates to the famous "Defective Sign and Encrypt" paper of Don Davis. That is, many Web Services platforms allow you to configure a policy which will sign the body of a SOAP message, using WS-Security, and then encrypt the body of the SOAP message, again using WS-Security. What is wrong with that, you may ask? Well, a couple of things. Firstly, it means that the digest part of the signature is unencrypted, since it's up in the WS-Security block in the SOAP header and therefore escapes the encryption of the SOAP body. An attacker can use the digest to mount a plaintext-guessing attack on the SOAP body. Plaintext attacks in the world of Web Services are aided by the fact that most Web Services platforms expose WSDL for services by default, and that WSDL generally includes a Schema. The Schema gives the structure needed for the plaintext-guessing attack.
Timestamps are another common issue. Developers often do not understand what a replay attack is, assuming that it is something like a Denial-Of-Service attack (which, understandably, also may include a replayed message). But, a replay attack involves a valid message being obtained by an attacker, then replayed to a Web Service. This valid message may include a valid username/password combination, or a valid username and password digest combination, or a valid XML Signature. Often, a Web Services platform will be set up to validate incoming Usernames+Passwords, or validate an XML Signature and check that the signer is trusted. In these cases, without timestamp checking, the Web Services platform is vulnerable to a replay attack. A nonce ("number once") can also be used to block this attack.
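Nonce checking reduces to: remember every nonce seen, and reject any message whose nonce has appeared before. A sketch of that server-side cache (simplified: a real cache would expire entries once their accompanying timestamp falls outside the acceptance window, so the set doesn't grow forever):

```python
import secrets

class NonceCache:
    """Reject any message whose nonce has been seen before -- the
    complement to timestamp checking for blocking replays."""

    def __init__(self):
        self.seen = set()

    def accept(self, nonce: str) -> bool:
        if nonce in self.seen:
            return False      # replay: this exact message was already processed
        self.seen.add(nonce)
        return True

cache = NonceCache()
nonce = secrets.token_hex(16)      # the client puts this in, e.g., a UsernameToken
first = cache.accept(nonce)
replayed = cache.accept(nonce)     # an attacker resends the captured message
```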
Of course, XML Gateways such as Vordel's XML Gateway block these attacks, and add a level of security in front of the Web Services platform, even if the Web Services platform is misconfigured. And on the client side, you can test for Replay Attack vulnerability by using the Vordel SOAPbox as a testing client, create a WS-Security UsernameToken message or a WS-Security X.509 Certificate Token message, and simply send it through twice.
As a footnote, I should mention here Microsoft's "Project Samoa". This validates policies in .NET for security weaknesses. The Web page for Samoa on the Microsoft Research site seems to imply that it hasn't found its way into WCF yet [people from Microsoft please correct me if I'm wrong!].