Monday, September 17, 2012

Web Service Security: Threats & Countermeasures

 

Denial of Service (DoS)


Oversize payload / Recursive XML

<attack1>
  <attack2>
    .... nested 10,000 elements ....
      <attack10002> .... big data .... </attack10002> ....
Countermeasure: limit the message size with a gateway/firewall, use XSD length restrictions, limit the element nesting depth, and don't use maxOccurs="unbounded" in the XSD.
While we can also limit the message with application-server settings or XSD validation in the proxy, it's better to reject the messages as early as possible (e.g. in the gateway with an XML firewall) before they burden the load balancers and application servers.
Use throttling (also for log file generation).

Entity Expansion / XML bomb

Excessive/recursive entity references to overwhelm the server, e.g.
<!DOCTYPE s[
<!ENTITY x0 "hack">
<!ENTITY x1 "&x0;&x0;">
... Entities from x1 to x99... 
<!ENTITY x100 "&x99;&x99;">
]>
...
 <soapenv:Body>
  ...
  <s>&x100;</s>
Countermeasure: reject messages with an <!ENTITY> declaration (or any DTD), use SOAP 1.2, use an XML firewall.

XML External Entity DOS

An entity reference to an external resource (e.g. a huge file) to overwhelm the server, e.g.
<!DOCTYPE order [
<!ELEMENT foo ANY >
<!ENTITY hack SYSTEM "http://malicious.kom/bigfile.exe" >
]>
...
 <soapenv:Body>
   ...
   <foo>&hack;</foo>
Countermeasure: reject messages with an <!ENTITY> declaration (or any DTD), use SOAP 1.2, use an XML firewall.
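
Both entity attacks can also be stopped at the parser, before the message is processed. A minimal hardening sketch for a JAXP DocumentBuilderFactory (the feature URIs below are the standard Xerces ones used by the JDK's built-in parser; adapt them if your stack uses another parser):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class HardenedParserFactory {
    public static DocumentBuilderFactory newHardenedFactory()
            throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject any DOCTYPE outright: blocks both the XML bomb and the external entity attack above
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Defense in depth, should a DTD ever have to be allowed:
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}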

Malformed XML

Overwhelm the server with exceptions, e.g. by omitting an XML closing tag or using a wrong date-time format.
Countermeasure: XSD validation.

Weak XML definitions

e.g. the <xsd:any> element, which allows arbitrary additional elements
Countermeasure: prevent the use of <xsd:any>.

Buffer overflow

An oversized message that overrides variables / operation addresses; also usable as a DoS attack.
Countermeasure: use programming languages/frameworks that are safer regarding buffer overflow (e.g. Java), apply bounds checking.

Non-content attacks

The DoS attacks described above are mainly content-based, sending malicious / oversized contents. But web services are indirectly also vulnerable to non-content attacks (e.g. SYN flood) that overwhelm the network infrastructure (firewall, switch/router).
Countermeasure: use firewalls/switches/routers with anti-DoS filtering features such as TCP splicing/protocol analysis, bogon filtering, anomaly detection, rate limiting.


Command Injection


SQL injection

Manipulate the parameters such that a malicious SQL statement runs in the database,
e.g. <password>' or 1=1 </password>
Countermeasure: XSD validation, sanitize the input, use prepared statements (see the sketch below)
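
A minimal prepared-statement sketch (assuming a JDBC Connection conn and a users table; the driver binds the value, so an input like ' or 1=1 stays a literal string):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // Parameterized query: user input is bound, never concatenated into the SQL text
    static boolean userExists(Connection conn, String username) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM users WHERE username = ?")) {
            ps.setString(1, username);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}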

XPath injection

e.g.
//user[name/text()='Admin' and password/text()='' or '1'='1']
or use the union operator | to extend the query.
Countermeasure: XSD validation, sanitize


XML Injection

Web service input:
Username: tony
Password: Un6R34kb!e</password><!--
E-mail: --><role>admin</role><mail>s4tan@hackers.com

The result in the xml database:
<user>
    <username>tony</username>
    <password>Un6R34kb!e</password><!--</password>
    <role>guest</role>
    <mail>--><role>admin</role><mail>s4tan@hackers.com</mail>
</user>
So the attacker changes the default role guest to admin.

Countermeasure: XSD validation, sanitize (e.g. encode < and >; see the sketch below)
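
A minimal escaping sketch for the "encode <, >" countermeasure (escape untrusted values before embedding them in XML or HTML; the ampersand must be replaced first):

public class XmlEscaper {
    static String escapeXml(String in) {
        return in.replace("&", "&amp;")     // first, or the other escapes get double-escaped
                 .replace("<", "&lt;")
                 .replace(">", "&gt;")
                 .replace("\"", "&quot;")
                 .replace("'", "&apos;");
    }
}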

XSS using CDATA Injection

A vulnerability when you display the WS response in a web page or evaluate the response as an Ajax object, e.g. to reveal the sessionID in the client cookie:
<![CDATA[<]]>script<![CDATA[>]]>alert(document.cookie) <![CDATA[<]]>/script<![CDATA[>]]>
Countermeasure: XSD validation, sanitize (e.g. encode <,>)

Execute binary files or system call command

The attack methods above (e.g. SQL injection, XML injection) can be used to run system commands using database / XML-processor features (e.g. XSLT exec())
Countermeasure: XSD validation


Malicious Reference

Routing Detour

The attacker changes the reference address in the HTTP header / WS-Routing / WS-Addressing, e.g.
<wsa:ReplyTo>
  <wsa:Address>http://hackersWS</wsa:Address>
</wsa:ReplyTo>
Countermeasure: SSL


Reference Redirect

A reference to a malicious external resource, e.g.
<sig:Signature>
  ....
  <sig:Reference URI="http://maliciousweb/VERYBIGFILE.DAT">
Countermeasure: prohibit references to resources outside the document.

Impersonation

A malicious web service with a similar interface (WSDL)
Countermeasure: protect the web service reference from man in the middle attack with SSL. Use certificate authentication.

Authentication (WSS or transport-level)


Weak password

The attacker guesses the password (e.g. using a brute-force / dictionary attack)
Countermeasure: use stronger authentication (e.g. certificate-based, multi-factor authentication), enforce strong passwords (e.g. minimum length & character sets), lock out accounts after multiple authentication failures, and don't give clues to the attackers, e.g. "valid username but wrong password".

Replay attack

The attacker captures the authentication token (e.g. password, session token) and then reuses it in his own request.
Countermeasure: one-time nonce / password digest, SSL, certificate-based authentication (a digest sketch follows below)
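
For illustration, the WS-Security UsernameToken password digest is Base64(SHA-1(nonce + created + password)): the fresh nonce and timestamp make a captured token worthless for replay. A minimal sketch (password is the assumed shared secret; the nonce and created values travel with the token so the server can recompute the digest and reject reuse):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;

public class WssDigest {
    // WSS password digest = Base64( SHA-1( nonce + created + password ) )
    static String passwordDigest(String password) throws Exception {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);        // one-time random nonce
        String created = Instant.now().toString();  // creation timestamp
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);
        sha1.update(created.getBytes(StandardCharsets.UTF_8));
        sha1.update(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sha1.digest());
    }
}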


Authorisation


URL traversal attack

e.g. the attacker knows the RESTful WS endpoint
GET http://library/booklist/?title="hacking"
the attacker might try
GET http://library/secretdocumentlist/?title="hacking"
Countermeasure: ACL on the URL tree.

Web parameter manipulation attack

REST WS e.g.
GET http://library/secretdocumentlist/?role="employee"
GET http://library/secretdocumentlist/?role="boss"
Countermeasure: ACL. Don't make security decisions based on URL parameters (sessionID, username, role).

Illegal Web method

e.g. the attacker knows the RESTful WS URL for the GET operation to read data; he can try the POST operation to modify the data.
Countermeasure: ACL for method access.


Encryption


Weak cryptography

Countermeasure: use well-proven encryption algorithms (e.g. AES) in well-proven libraries instead of inventing and implementing your own algorithm. Protect your keys (a sketch follows below).
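
A minimal sketch with the standard JCE (AES-GCM authenticated encryption; plaintext is the assumed byte[] to protect, and in practice the key comes from a protected keystore rather than being generated on the spot):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesGcmExample {
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];                   // fresh 96-bit IV for every message
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);           // the caller must also store/send the IV
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                               // AES-256; old JDKs may need the unlimited-strength policy files
        SecretKey key = kg.generateKey();           // in practice: load the key from a protected keystore
        byte[] ct = encrypt(key, "sensitive data".getBytes(StandardCharsets.UTF_8));
        System.out.println(ct.length + " ciphertext bytes");
    }
}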

Failure to encrypt the messages

If you don't use encryption, attackers can capture your authentication token and use it to impersonate you.
Countermeasure: Use encryption (e.g. SSL or WSS & XML-Encryption)

Messages are not protected at the intermediaries

You use point-to-point SSL encryption, but inside the intermediaries your message is decrypted. An intermediary can read your sensitive data and use it to its advantage.
Countermeasure: use end-to-end encryption (WSS & XML-Encryption)

Data tampering

An attacker modifies your message for his advantage.
Countermeasure: signature and encryption (WSS & XML-Encryption)

Schema poisoning/ metadata spoofing

Maliciously changing the WSDL (e.g. to redirect the service address to a malicious site, to manipulate data types, to remove the security policy) or manipulating the security-policy document (to lower the security requirements), e.g.
<wsdl:port name="WSPort" binding="tns:WSBinding">
  <soap:address location="http://hacker.kom/maliciousWS"/>
</wsdl:port>
Countermeasure: check the authenticity of the metadata (e.g. by signing), use SSL to avoid man-in-the-middle attacks

Repudiation

A client refuses to acknowledge that he has violated the user agreement (e.g. performed a dictionary attack against the web service authentication).
Countermeasure: keep client message signature in the log. Protect the log files.



Information disclosure



WSDL disclosure

The WSDL contains a lot of information for the attacker (operations, message formats).
Countermeasure: protect the WSDL endpoint with an ACL/firewall. Use robots.txt to keep the WSDL from appearing in Google.

UDDI disclosure

UDDI gives the attacker information about the WSDL location.
Countermeasure: don't publish the WSDL in UDDI

Error message

The attacker sends malformed messages / DoS attacks so that the web service returns error messages which can reveal information (e.g. database server address, database vendor).
Countermeasure: don't publish sensitive information (e.g. connection strings) in the error messages. Sanitize the error messages (e.g. strip the stacktrace)


Testing Tools

• SOAPUI
• WSDigger
• WSFuzzer



Security checklist:

http://soa-java.blogspot.nl/2012/09/security-checklists.html


Web service message level security WS-Security (WSS) and transport level security (TLS):
http://soa-java.blogspot.nl/2013/04/web-service-security-message-level-vs.html


Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com





References:

• SOA Security by Kanneganti
• Oracle Service Bus 11g Development Cookbook by Schmutz, Biemond et al.
• Developing Web Services with Apache CXF and Axis2 by Tong
• ws-attacks.org
• Web Service Hacking, Progress Actional whitepaper
• OWASP Web Service Security Cheat Sheet
• Attacks on Web Services by Bidou
• Web Services Security by Negm, Forum Systems Inc.
• OWASP Top Ten Web Services Vulnerabilities by Morana
• http://www.soapui.org/soap-and-wsdl/web-service-hacking.html
• NIST Guide to Secure Web Services
• http://clawslab.nds.rub.de/wiki/index.php/XML_C14N_Entity_Expansion
• http://clawslab.nds.rub.de/wiki/index.php/XML_External_Entity_DOS
• http://projects.webappsec.org/w/page/13247004/XML%20Injection
• http://clawslab.nds.rub.de/wiki/index.php/Routing_Detour
• http://clawslab.nds.rub.de/wiki/index.php/Reference_Redirect

Tuesday, September 11, 2012

Security Checklist


This list is mainly for developers, but can also be useful for architects, security managers and testers. It is written mainly from a design and coding perspective, enriched with configuration, operational and human-process aspects.

Please see also "Web services security threats": http://soa-java.blogspot.nl/2012/09/web-service-security-threats.html

This is a part of the blog series about (SOA) software guidelines. For the complete list of the guidelines (i.a. about design, security, performance, operations, database, coding, versioning) please refer to: http://soa-java.blogspot.nl/2012/09/soa-software-development-guidelines.html

General design principles

• Prefer policy-based declarative security over programmatic security: it separates the security configuration from the business code and the cross-cutting concerns (e.g. security, logging) from the application logic. Beware that business code and security configuration typically have different life cycles and are implemented/managed by different people.
• Prefer message-level / end-to-end security (e.g. WSS) over transport-level / point-to-point security (e.g. SSL): it protects the messages at intermediate services and offers the flexibility to protect only portions of the messages (for performance).
• Does the service/data need authentication, authorization, signature/non-repudiation, encryption?
• If the web service is used to wrap a legacy service: be aware of the vulnerabilities of the legacy service and of how to reconcile the security models (e.g. credentials/roles mapping); some legacy applications have no security provisioning at all
• Use white lists instead of black lists
• Throttle the requests / message sizes to prevent DoS
• Defense in depth: don't rely on a single layer of security (e.g. apply also authentication & SSL instead of protecting the infrastructure with firewall only)
• Check at the gate (e.g. validate and authenticate early)
• Secure the weakest link
• Compartmentalize: isolate and contain problems, e.g. firewall/DMZ, least-privileged accounts, root jail.
• Secure by default e.g. close all ports unless it's necessary
• Communicate the assumptions explicitly e.g. firewall will secure all our internal services with no ports open to outside world
• Understand how the infrastructure restriction (e.g. firewall filtering rules, supported protocol, ports allowed) will affect your design
• Understand the organizational policies/procedures (e.g. what applications and users are allowed to do) so you don't run into acceptance problems with the production team because your services breach these policies
• Understand the deployment topology imposed by your organization's structure (e.g. your company has many remote branch offices connected to the main server farm via VPN)
• Understand the identity propagation / credential mapping across trust boundaries (e.g. apache web account >  weblogic web service account  > database account)
• Security measures (e.g. authentication, encryption, signing) cost performance (increased processing cost and message size) as well as other quality attributes such as usability, maintainability (e.g. distribution of certificates) and operability (e.g. security service / identity provider failure). So consider the trade-off between security and the other quality attributes given your company's infrastructure and policies (e.g. if the firewall policy in your company is very strict, you might lessen the encryption requirements for internal services).
• While applying security by design, I still keep the "security through obscurity" to some extent, e.g. I will not publicly publish the security architecture of my company (the endpoints/ports, wsdl/schema, libraries used, etc).


Security process & management


• Design & code review (e.g. login & logout mechanisms, the authorization logic in each Struts action)
• Include security in your development process (e.g. SDL); use threat modeling during the analysis & design phases.
• Make sure that your programmers and network/server administrators are capable of dealing with security issues; arrange training if necessary.
• Make sure that the operational team knows the contingency procedures (e.g. what to do in case of a DoS attack or a virus spreading in your network). Have a contingency plan / crisis-management document ready: where the configurations are, how to isolate and handle failures, how to restart in safe mode, how to turn on/off and deploy/undeploy modules/services/drivers, who gets informed and how, which services/resources have priority (e.g. telephony service, logging service, security services). Keep this document in multiple printed copies (the intranet and printers may not work during a crisis). The crisis team should have exercised the procedures (e.g. under a simulated DoS attack) and measured metrics during the exercise (e.g. downtime, throughput in degraded mode).
• Plan team vacations such that at least one crisis-team member is always available. Some organizations need a 24/7 full-time dedicated monitoring & support team.
• Hire external party for penetration testing and security audit.
• Document incidents (root causes, solutions, prevention, lesson to learn), add the incident handling procedures to the crisis management document.
• Establish architecture policies for your department. Establish a clear role for who will guard the architecture policies and guidelines, e.g. the architects via design/code reviews.
• For maintainability & governance: limit the technologies used in the projects. Avoid constantly changing technology while still staying open to new ideas. Provide stability so developers can master the technology.
• Establish change control. More changes mean more chances of failure. You might need to establish a change committee to approve change requests. A change request consists of the why, the risks, a back-out/undo plan, version control of configuration files, and a schedule. Communicate the schedule with the affected parties beforehand.


Authentication

• Prefer stronger authentication (e.g. 2-way X.509 certificate authentication) over basic (password-based) authentication.
• If you use basic authentication, use SSL or a password digest to protect the password.
• Store credentials and authentication tokens / passwords encrypted or as salted hashes (see the sketch after this list).
• Force users to use strong passwords and/or multi-factor authentication. Use a password-expiration feature.
• Avoid sending passwords to external applications (e.g. when an external application needs to access resource services); use OAuth instead.
• Disable test and example accounts.
• Centralize credentials (e.g. passwords, service accounts), for example in an LDAP server, for better manageability. Redundancy (e.g. fail-over clusters) can be used to prevent a single point of failure.
• If you use certificate-based authentication: always check the validity of the certificates (e.g. using a CRL).
• Prevent brute-force / dictionary attacks (e.g. on an add-new-user webpage) using CAPTCHA, email validation, and locking after a maximum number of attempts.
• Use SSO / a centralized security service: users don't need many accounts/passwords, users don't have to share their passwords with many applications/resources, and developers don't have to maintain multiple authentication mechanisms in different systems. With a federated identity provider you can centralize credentials across organizations.
• Use standard security solutions (e.g. OAuth, OpenID, SAML to exchange security messages); don't reinvent the wheel. It's riskier to implement your own security solution than to use a well-tested one.
• Authentication should happen on the server side (not on the client side / in JavaScript).
• Avoid plain-text passwords in configuration files (e.g. fstab); save passwords in password/credentials files and protect those files (chmod 600, and encryption if possible).
• Beware of remote OS authentication, for example in Oracle databases, since an attacker can try to connect using a username that matches an OPS$ account in the database.
• Send a confirmation when a user changes his/her password, email, mobile number or other sensitive personal data.
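
A minimal salted-hash sketch with the JDK's built-in PBKDF2 (PBKDF2WithHmacSHA256 requires Java 8; the iteration count is an assumption to tune to your hardware):

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStore {
    // Returns {salt, hash}: persist both, never the password itself
    static byte[][] saltedHash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);        // fresh random salt per user
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec).getEncoded();
        return new byte[][] { salt, hash };
    }
}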


Session management

• Limit the life time of cookie or authentication/authorization tokens.
• Prevent replay attack by using one time nonce.
• Prevent CSRF by using a secret nonce as a request parameter (e.g. for OAuth) and validating the nonce on the server side; beware that a nonce cookie doesn't prevent CSRF. Also prevent CSRF by informing the user about the action (e.g. "you're about to transfer $100") and asking for reconfirmation/re-authentication.
• Generate session IDs with a strong random generator, at least 128 bits long (see the sketch after this list).
• Appropriate logout mechanisms (e.g. invalidate sessions, clear all cookies).
• Force user to re-authenticate for sensitive operations (e.g. change password).
• Hide the sessionID (e.g. in secure cookies instead of in a GET URL parameter or a hidden form field).
• Validate the security token with an HMAC or encrypt the token, e.g. session ID cookies must be encrypted / use HttpOnly secure cookies.
• Limit the cookies domain & path.
• Always provide logout feature. Make sure that logout is properly done (invalidate session, remove session cookies).
• Issue a new session id for each login action (to prevent session fixation).
• Identify possible session hijacking when multiple IP addresses / geolocations simultaneously use the same sessionID.
• Use anti-caching HTTP headers (Cache-Control: no-cache and Pragma: no-cache).
• Use the HttpOnly cookie flag to prevent client-side (Java)scripts from querying the cookies.
• Set the session timeout.
• Appropriately handle requests which indicate security-check circumvention or an obvious attempt at privilege escalation, e.g. in case of requests with an invalid sessionID: log the sender's IP address, invalidate the session and redirect to the login page.
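
A minimal session ID sketch for the "128 bits, strongly random" item above (URL-safe Base64 over 16 bytes from SecureRandom):

import java.security.SecureRandom;
import java.util.Base64;

public class SessionIds {
    private static final SecureRandom RNG = new SecureRandom();

    static String newSessionId() {
        byte[] id = new byte[16];                  // 128 bits of entropy
        RNG.nextBytes(id);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(id);
    }
}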

Authorisation

• Determine which web service operations or which web page/actions need authorization.
• Determine the privilege/roles for your service/web.
• Strong ACL (in operating system/file system level, application level, servers).
• Apply the least-privilege principle: don't use an admin account for daily operations (e.g. reading the database); create specific accounts for specific operations (e.g. CreditcardReadOnlyAccount, UpdateInventoryAccount, WebShopInventoryReadOnlyAccount).
• Audit/log administrator activities (e.g. create new user, grant).
• Remove the default accounts & ACL in your system if possible (e.g. remove BUILTIN/Administrators group from SQL Server login) or rename the default (administrative) accounts if possible (e.g. sa user  in SQLServer).
• Run the server in root jail.
• Centralize authorization (e.g. using OAuth) to reduce the burden of reconciling different access rights in different systems across trust boundaries (e.g. apache role=boss mapped to database role=readwriteEmployeeData).
• Use ACLs on the URL tree and on the web methods allowed, e.g. the REST URL http://myweb.kom/myprofile should be accessible to me & my friends for the GET method, and to me only for the PUT method.
• Use ACLs to protect directories and files from traversal attacks.
• Use an ACL per user/session to filter direct & indirect references (e.g. links).
• Authorization checks in all protected GUI operations (e.g. Struts actions, admin HTML pages) and web service operations.
• Beware of the system-call features in your framework that can be used for hostile purposes, e.g. Runtime.exec() in Java or the stored procedure xp_cmdshell in SQLServer. Solution: root jail, least-privilege accounts, ACLs, disable unnecessary features, run the application server with read-only privileges on the web-root directory (e.g. the Apache nobody user).
• Make sure account lockout doesn't result in DoS.
• Check (e.g. against a white list) all references submitted via input (e.g. web service request, file, database).
• Avoid URL jumping (e.g. Checkout -> Delivery instead of Checkout -> Payment -> Delivery) by checking the last visited page (e.g. in a session variable).
• Remove the guest account / anonymous login if it's not really needed. At least review the guest/public account and remove unnecessary privileges from it.
• Review the ACLs (& authentication credential lists) regularly to detect forgotten change actions (changed roles, departed employees)


Confidentiality, Encryption, Signing

• Encrypt/hash sensitive data, e.g. bank accounts in the LDAP production copy used for development/test.
• Use message-level XML-Encryption to protect sensitive data at intermediaries / external proxies / clouds. Point-to-point SSL doesn't prevent the intermediaries from reading the sensitive data. With XML-Encryption it's also possible to encrypt only a part of the messages, which is more flexible (e.g. in case an intermediary proxy needs to peek at the unencrypted part). Message-level security (e.g. WSS authentication, XML-Encryption, XML-Signature) is independent of the transport protocol, so it offers more flexibility to send SOAP messages across different protocols (e.g. http, jms, ftp).
• Use signature and saved logs for non-repudiation
• Use signature for message integrity
• Protect the key (e.g. don't back up the key and the encrypted data on the same backup tape)
• Use well-proven encryption algorithms (e.g. AES) in well-proven libraries instead of inventing and implementing your own algorithm.
• Don't register sensitive services to UDDI
• Use robots.txt to prevent sensitive files (e.g. WSDL, source code, configuration files, confidential documents) from appearing in Google.
• Don't store secrets in the client side (e.g. hidden form field, cookies, HTML5 storage). If you really need to store sensitive data in the client (or to pass them in the message): obfuscate the name and encrypt/hash the value. Beware of persistent cookies (the information will be written to the file system hence can be read by malicious users)
• Secure backup (e.g. with encryption), store it in a secure place.
• Avoid mixed SSL / non-SSL web sites (it causes user warnings in the browser and can expose the user ID). Use CA-valid certificates (to avoid user warnings in the browser).
• An example deployment pattern: use DMZ proxy servers between the outer and inner firewalls to expose (enterprise) services to the public. The servers in the DMZ are treated as bastion hosts; special attention is given to protecting these servers against attacks.
• Load sensitive data on demand and clear it from memory as soon as you don't need it anymore. Don't keep/save it (e.g. in session variables or a cache) if it's not really necessary.
• Use a sufficient key size. Securely distribute, manage and store the keys; change the keys periodically.

Coding

• Limit accessibility: e.g. declare classes/methods/fields as private instead of public
• Declare sensitive classes/methods/fields as final so they can't be overridden
• Don't write secrets in the code (e.g. database connection string), beware that the secret strings in the compiled classes can still be read using reverse engineering tools
• Remove test code, example code, example database
• Using framework/library functions can be safer than building your own (e.g. use jQuery's .ajax to process a JSON response from an Ajax call instead of plainly using eval). But make sure that the third-party libraries you use are safe (e.g. code review) and follow the security newsgroups for those libraries.
• Use JSP comment tags instead of HTML comments, so the code comments won't be visible to the client
• Use prepared statements for querying the database, to protect against SQL injection (and for better performance).
• If you need to redirect via url parameter consider using mapping value instead of the actual url link. Make sure that the redirect url is valid and authorized for the user.
• Beware of null-byte insertion, e.g. circumventing if ($user ne "root") with user="root\0" in Perl. Solution: validate inputs (see the white-list sketch after this list).
• Beware of buffer overflow attacks (e.g. to override variables / operation addresses, or as DoS). Solution: use programming languages/frameworks that are safer regarding buffer overflow (e.g. Java), apply bounds checking
• Beware of race condition exploitation for example to overwrite the username of another individual's session. Solution: avoid sharing variable between sessions via global variables / files / database /registry entry.
• Use CAPTCHA to distinguish genuine human inputs from robot inputs.
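
A minimal white-list input validation sketch (the allowed pattern is an assumption; tighten it to your own data). A rejected input never reaches the shell, SQL or XML layer, which also defuses tricks like the null byte above:

import java.util.regex.Pattern;

public class InputValidator {
    // White list: accept only the characters you expect; everything else (including \0) is rejected
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,32}$");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}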

Configuration / operation management

• Protect / restrict access to configuration files & admin interfaces.
• Encrypt/hash sensitive configuration data (e.g. database connections, passwords).
• Centralize security management (e.g. OPSS for Oracle Fusion Middleware, JAAS for Java applications) instead of managing different configurations spread across GUI, web services and database.
• To prevent DoS attacks: restrict the message size (e.g. default 10MB in Weblogic) and set server timeouts. It's better to counter DoS as early as possible (e.g. in the firewall/gateway with Cisco rate limiting) before the load balancers & application servers.
• Run application servers/database/LDAP with minimum privileges; avoid running the server as root.
• Reduce the attack surface: disable unnecessary daemons, ports, users/groups, Apache modules and network storage on the server. Disconnect network file servers if they're not necessary
• Update the OS/application servers/database/LDAP/libraries with the latest (security) patches
• Remove the temporary files (e.g. hibernate.properties.old or httpd.conf.bak)
• Audit/scan regularly for new vulnerabilities. It's not enough to do penetration test only during the first acceptance-test since the attack surface can grow with time.
• Follow the security newsgroups/websites (e.g. BugTraq), discuss the potential new threats with your security manager.
• Monitor the system lively for early detection of anomalies (e.g. multiple malicious logins from a certain IP address, unusual frequent web/soap requests to a certain url). Use Intrusion Detection System (IDS).
• Change the default application ports. Close the unnecessary ports with firewall.
• Minimize the allowed IP address source by using firewall, Apache httpd file, Weblogic connection filter.
• Use separate environments for production, test, development, and a sandbox playground (e.g. to test prototypes or try out new algorithms). Each component in these environments has different credentials than in the other environments. If the test & development data are based on production data (to make the tests more realistic), the sensitive production data should be masked.
• The (security) test configuration should be identical to production (e.g. firewall configurations, network topology, timeout settings); for example, you can use VMware's LabManager to achieve this.
• Hide server information in the HTTP headers (e.g. ServerSignature Off in apache.conf).
• Turn off the HTTP TRACE feature (e.g. using Apache mod_rewrite); turn off debugging features in production.
• Centralize security management (e.g. in the case of a Weblogic infrastructure: using OWSM with security policies) for better manageability and fewer mistakes.
• Use a configuration-change detection system (e.g. monitoring of admin activity log files, Tripwire).

Data

• Present the minimum data needed for any business request
• Don't blindly trust input data (from client GUI/cookies, database, web service requests): always validate and sanitize the input
• Validate/preprocess (to prevent code/SQL/command injection, XSS, DoS) in this sequence:
    o canonicalization: transform different representations (e.g. %5c, which is "\" and can be used for a directory traversal attack) to a canonical form.
    o sanitization: encode/escape unwanted characters (e.g. &lt; for <).
    o data validation: validate against white lists (e.g. an XSD that defines data type/format/range); see the validation sketch after this list.
• To prevent DoS / XML bombs: limit the input size (e.g. web service requests, file uploads via the GUI) using gateway/server configuration, XSD length restrictions, limited nesting depth, and no maxOccurs="unbounded" in the XSD. While we can also limit the message with application-server settings or XSD validation in the proxy, it's better to reject the messages as early as possible (e.g. in the gateway with an XML firewall) before they burden the load balancers and application servers.
• No security decision based on url params (which can be manipulated by clients).
• Validate & sanitize output (e.g. web, database) to prevent XSS, code injections.
• Use output encoding for special characters (to prevent XSS, code injections).
• Beware of double-encoding attacks (e.g. \ > %5c > %255c).
• Do not store sensitive data in cookies.
• How do you validate input data (from user input, database, external systems)? How do you handle validation-error situations?
• Avoid sensitive data in the code/scripts, config files and log files. Restrict access to these files (least-privilege principle).
• Encrypt sensitive data (e.g. employees' bank account published by LdapService).
• How do you prevent data fishing (e.g. limit output)?
• Use an XML firewall: it's faster (dedicated hardware) and takes the validation burden off the SOA/OSB servers. Rejecting the messages earlier gives better containment and preserves performance: threats should be addressed at the edge of the network instead of at the application layer.
• Reject SOAP messages with an <!ENTITY> tag (or the whole DTD) or use SOAP 1.2 to protect against entity attacks.
• Reject SOAP messages with CDATA to avoid CDATA injection.
• Attachment/file upload:
    o limit the size
    o the files must never be executed/evaluated
    o anti virus check
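
A minimal XSD white-list validation sketch with JAXP (the schema and message file names are hypothetical; validate() throws a SAXException on any violation):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class MessageValidator {
    // Validate an incoming message against the schema before any further processing
    static void validate(File schemaFile, File message) throws Exception {
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(schemaFile);
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(message));
    }
}

Usage: validate(new File("order.xsd"), new File("request.xml"));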

Error handling

• Prevent sensitive information (e.g. server-fingerprinting material for hackers) from appearing in error messages. Return a generalized error message (to hide the implementation technology) instead of just passing on the original error string from the framework (e.g. a Java stacktrace); see the sketch after this list.
• Don't put personal information (e.g. a developer's name) in the error message, to avoid social-engineering exploits.
• Test and understand the behavior of your system in case of failure / error.
• Catch all possible errors / failures and handle them gracefully to avoid DoS.
• Restore the appropriate privilege level in case of error / failure, e.g. invalidate the session
• Security mechanisms must keep working in case of errors / exceptions / DoS attacks
• Release resources (e.g. file, database pool) in case of error to prevent DoS.
• Centralized error handling.
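
A minimal sketch of a sanitized fault (the caller only gets an opaque reference; WebServiceException is the JAX-WS runtime exception, and the JDK logger stands in for whatever logging framework you use):

import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.ws.WebServiceException;

public class SafeFaults {
    private static final Logger LOG = Logger.getLogger(SafeFaults.class.getName());

    static RuntimeException sanitizedFault(Exception cause) {
        String ref = UUID.randomUUID().toString();
        LOG.log(Level.SEVERE, "internal failure, ref=" + ref, cause);  // full stacktrace stays in the protected log
        return new WebServiceException("Internal error (reference " + ref + ")");  // no vendor or stacktrace leaks
    }
}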

Logging

• Log and monitor sensitive operations (e.g. create user, transfer money).
• Protect log files / other files (e.g. history) which can be useful for forensic investigation using ACLs; use signatures if necessary.
• No sensitive information (e.g. passwords) in the log; check the regulations (e.g. SOX in the US, WBP in the Netherlands).
• Information in the log: userID, action/event, date/time (normalized to one time zone), IP address.
• Throttle the log to prevent DoS or evidence removal through log file rotation.
• Centralize logging and standardize the logging information.
• Audit the logging regularly to detect malicious attempts, using an automatic alert system. What information is needed to observe signs of malicious activity? e.g. the number of connections per requester IP address.
• Validate and sanitize if you log the input (GUI form input, web service request, or external database).
• In case of attack, what trail of forensic evidence is needed (e.g. the IP address of the attack messages)?
• Know your baseline (typical log file growth in normal operation); plan log backup/removal and log rotation accordingly.


Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com


References:


• Hacking Exposed Web Applications by Scambray et al.
• How to Break Web Software by Andrews & Whittaker
• OWASP Code Review Guide
• Improving Web Services Security (Microsoft patterns & practices) by Meier et al.
• XSD restrictions: http://www.w3schools.com/schema/schema_facets.asp
• ISO 27001, ISO 27002

Weekly Status report template


• Progress this week (planned & actual begin/end/duration)
• Unplanned activities this week (begin/end/duration)
• Pending: planned this week but not yet completed (and the reason, e.g. dependencies on previous unresolved bugs, coding that is not finished yet)
• Planned activities next week (with priority lists)
• Change requests (e.g. requirement change/additional requirements)
• Issues (new issues, pending issues/bugs with severity, dependencies, roadblocks, ESCALATION) and risks, e.g. unavailability of resources (holidays), extra resources needed, firewall adjustment by the infrastructure team.
• Impact on the overall delivery deadline, impact on the planning of the next activities (e.g. acceptance test by users)

Remember the KISS (Keep It Simple) principle: you don't have to write all these items every time. Your manager has no time to read a long weekly report.




Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com


Further reading:
http://workarrow.com/burn-your-weekly-status-report-on-second-thought/

Monday, September 10, 2012

Test Checklists



Notes:
• This is a continuation from the blog about Development Test http://soa-java.blogspot.nl/2012/09/development-test.html
• For these checklist items I sometimes use questions instead of mandatory compliance checks (e.g. "how to set up test data" instead of "checklist: test data should always go via the database"). The goal of the checklist is to prompt our minds to be aware of certain issues, not to force a specific/narrow solution. The "best" choice depends on the project context (e.g. test goal, security environment, etc.).
• The symbol "»" at the beginning of a line means that the item is relatively important.

Test Plan template

• Date, version, test objectives & scopes (e.g. functional-requirements acceptance test, security penetration system test, performance unit test)
• Process definitions (can be defined in the dev-team level, so don't have to be written in each test plans): metrics (e.g. #bugs & severity), defect classification, exit criteria, defect management, defect reporting (e.g. Trac), deliverables (e.g. test case library), if review or approval is needed for this test plan (e.g. test manager, clients).
• Assumptions (e.g. firewall and server configurations mimic the production environment)
• Preconditions for the whole set of test cases, e.g. software licenses, the production database is cloned into the LabManager/virtual-machine test environment
• » For each test case:
     o Test case name and short description
     o Traceability with requirement/use-case docs (i.e. the requirement ID)
     o Preconditions for this test case (e.g. certain data states in the database, certain inputs from mock web services)
     o Test steps and inter-dependencies with other test cases: e.g. fill-in employees' salaries steps: ....., dependency:  add new employees (test case#1)
     o Input data, e.g. birth date 31-2-1980 (an invalid date)
     o Expected results
     o Part of system (e.g. GUI/presentation tier)
     o Area (e.g. security, functional, performance)
     o Test method (e.g. manual, unit test)
     o Priority / risk / test effort / test coverage (e.g. high, low)
• » Resources:
     o roles, who will build/execute the tests and how many man-hours needed (including external resources & trainings needed due to skills-gap)
     o server/database/software/tools/hardware needed
• Schedule/plan

Test Report template

• » Test date, version, tester name, artifact (which jar, svn revision), test environment (which server/LabManager), test code version (svn rev)
• » Test objectives & scopes (e.g. functional requirements acceptance test, security penetration system test, performance unit test)
• »  For each test result:
     • Test result ID number
     • Traceability (test case ID number in the test plan, requirement ID number in the requirement docs)
     • Expected result, e.g. web service response time below 2 seconds (average) and 5 seconds (max).
     • Actual result and impact, e.g. result: the web service response time is 90 seconds; impact: the user waiting time in the GUI is 2 minutes (unacceptable according to the SLA)
     • Status:
          • Ok/green: tested ok
          • Bug/red(high priority)/yellow(low priority): defects, a ticket has to be made in bugzilla/trac (with priority level & targeted version/milestone)
          • No-bug/gray: won't fix, false-positive
          • Hasn't been tested/white
     • Follow-up actions (e.g. reworks by developers)

     • Part of system (e.g. GUI/presentation tier)
     • Area (e.g. security, functional, performance)
     • Priority / risk  (e.g. high, low)
• Root-cause analysis and recommendations, e.g. excessive bugs in authentication classes; root cause: inadequate knowledge; recommendations: training, code review.
• Resources (roles, planned & actual man-hours)
• List non-testable requirements, e.g. "the GUI should be beautiful".

Weekly Status report

Please see http://soa-java.blogspot.nl/2012/09/weekly-status-report-template.html

Test data

• » How to set up test input data (e.g. via a database copy or DDL/DML database scripts) each time we set up a new LabManager/test environment.
• » Make test cases for: too little data (e.g. empty input, null), too much data, invalid data (wrong format, out of range), boundary cases
• » Make sure the positive cases have correct data (e.g. validated according to xml schema, LDAP attributes & tree structures are correct)
• » How to mask sensitive test data (e.g. password, bank account)
• » How realistic are the data?
• How to collect / create test input data (e.g. sampling the actual traffic from a JMS topic or populating fake customer data using PL/SQL).
• How to recover/reinitialize data after a test (to fulfill the preconditions for the next test)
• How to maintain / version test data (i.e. test data for the current version and for the next software version)
• How to collect and save the test result data if needed (for further test or analysis)

Functional & Design

• » Test that the product correctly implements (every) requirement and use case (including alternative use cases)
• » The product works according to the design and its assumptions (e.g. deployment environment, security environment, performance loads)
• » Test the conformance to relevant standards: the company standard/guideline as well as common standard such as Sarbanes-Oxley (US) / WBP (Netherlands)
• Test that (every) function gives correct results (including rounding errors for numerical functions)
• Test (every piece of) application logic (e.g. flow control, business rules)

Performance test

• » Find out the typical data traffic (size, frequency, format) & number of users/connections in the production
• » Response time (UI, webservice) / throughput (web service, database) meet the requirements/SLA.
• » Load test: at what load the performance degrades or fails
• » Stress test: run the system for a long time under realistic high loads while monitoring resource utilization (CPU/memory/storage/network), e.g. to check for memory leaks and unclosed connections, and to tune timeouts and thread pools.
• In case of unacceptable performance: profile the system parts that affect the performance (e.g. database, queue/messaging, file storage, networks).
• Scale out (capacity planning for future) e.g. 3x today peak usage
• Test the time-to-complete of offline operations (e.g. OLAP/ETL bulk jobs scheduled every night). Is the processing time scalable? What to do if the bulk operation hasn't finished by 8:00 / working hours?
• Rerun the performance test periodically in case of changes in usage patterns (e.g. a growing number of users), configuration changes, or the addition of new modules/services. That way we can plan capacity ahead and prevent problems before they happen.

Reliability test

• Test (every) fault possibility; test the behaviour & error messages when an exception/failure occurs (e.g. simulate a network failure or a URL-endpoint connection error in the configuration plan)
• Test that faults don't compromise the data integrity (e.g. compensation, rollback the transaction) and security. Data loss should be prevented whenever possible.
• Test failover mechanism, check the data integrity after failover.


Environment/compatibility test:

• » Tests for different browsers (for UI projects), application servers (e.g. vendor, version), databases (e.g. vendor, version), hardware (memory, cpu, networks), OS (& version)
• » Tests for different encodings (e.g. UTF-8 中文), different time zones, different locales (currencies, language, format), e.g. 2,30 euro vs $2.30; test conversions between different components (e.g. database and LDAP servers can have different date formats).
• » Test file system permissions using different process owner (e.g. generate files with oracle-user & consume the files with weblogic-user during applications integration)
• » Test if the configuration files (e.g. deployment plan, web.xml, log4j-config.xml) work
• Integration test: the connections between components (e.g. the endpoints in the configuration plan)
• Install & uninstall, deployment documentation

GUI

• » All GUI messages (including error messages) are clear/understandable for end users and match the users' terminology
• » How frequent are the errors? How does the system react to user errors (e.g. invalid input, invalid workflow)? How do the users recover from errors?
• All navigations/menu/links are correct
• Check whether all GUI components (menu/commands/buttons) described in the user instructions exist
• The fonts are readable
• The GUI is consistent with the user environment (e.g. the web style in your organization)
• The software state is visible to the users (e.g. waiting for the backend response, error state, waiting for user input/action)
• Validate the (X)HTML and CSS: Doctype, syntax/structure valid
• More GUI testing checklists: http://www.sitepoint.com/ultimate-testing-checklist/

Tips for organizing usability test

• Identify the test subjects
• Provide a simple test guideline & result questionnaire; beware that your test subjects may not be very technical
• Is the software intuitive and easy to use? How much training is needed when you roll out this product to production?
• Is online help or a reference to user documentation available? User documentation should be complete enough and easy to understand for the intended audience
• Attend at least one test as a test participant

Coding

• Test that variables are correctly initialized
• Test multi-threading scenarios (race condition, deadlock)

Tools selection

• do any team members already have experience with this tool
• how easy is it to use
• customer reviews, popularity, how active the discussion groups/blogs are for learning
• maturity
• support
• how active the development is
• memory and processor requirements
• price/open-source
• easy to install/configure
• functionality: does this tool meet the requirements of the company's tests
• demo/try before buy

Security

• Authentication: login, logout, guest, password strength
• Authorisation: permissions, admin functions
• Data overflow, huge-input attack/DoS
• For more complete security checklists see http://soa-java.blogspot.nl/2012/09/security-checklists.html


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:

• Software Testing and Continuous Quality Improvement by Lewis
• Code Complete by McConnell
• Department of Health and Human Services, Enterprise Performance Life Cycle Framework, Checklist

Development Test

Since I am involved in the design/code/test review team at my work, I want to share some knowledge with you in this blog.


Scope

This document discusses the development test (test the code by developers before sending the artifacts to the QA/test team).


Benefits of developer testing

• Reduce bug fix costs by detecting the defects earlier before the code is delivered to QA/Test team.
• Early and frequent development tests give early feedbacks to the developer team.
• The statistics and trend charts can be useful for the management team to assess the maturity/reliability of the product and whether early actions are necessary to correct the process. For example, it would be too late to discover that you need to hire a security expert after the product has already been sent to an external party with security bugs in it. The QA manager can use the test statistics to decide whether or not to accept the product from the development team.


Define the process

• Determine how the developers do the tests in the software process, e.g.
   o build the test before the developer start coding (TDD/agile),
   o run the test after the code is mature enough before delivery to the test team (waterfall)
   o perform automatic continuous integration tests after every SCM commit (Agile)
   o perform exploratory tests (Agile)
   o scrum demo / user test for user feedback at the end of each Sprint (Agile)
   o spiral/incremental test: run the tests iteratively, adding new tests for the integration of new SOA components (while keeping the previous tests running as regression tests) at each Scrum sprint
• Do you need a test plan / documented test cases?
• Does the test plan need to be reviewed (e.g. for completeness)?
• Define the process/how tests will be conducted e.g. automatic test (unit test, GUI Selenium), manual user test, manual exploratory tests?
• Determine entry criteria (e.g. code is mature enough)
• Determine exit criteria (e.g. approval by developer manager, approval by QA manager that the code is mature enough to be delivered to the QA/test team)
• Determine metrics (e.g. error list with severity & type)
• Are tools available to assist test process (e.g. SOAPUI test, yslow)?
• Determine the defect reporting/communication channel: how to report test results (e.g. Trac, bugzilla), how to archive test cases & results (e.g. svn, wiki), defect management (e.g. how to track the test status, reworks and retesting)
• Determine who will play the tester role. You may have several testers assigned to specific areas (e.g. security developer specialist for penetration testing, or invite customers for use cases testing).
• Determine the time needed to develop and perform tests. Discuss the time/plan with project manager / team lead to obtain management support. Schedule the meetings. Set time-limits.
• Do the tests, register the anomalies.
• Discuss whether or not a fix is needed.
• Discuss the fix, decide which version the fix should be done, who will do the reworks, estimate/plan the reworks
• Determine the exit decision e.g. re-inspection after required reworks, minor reworks with no further verification needed.
• Reschedule the follow up/ reinspection for reworks.
• Collect "lessons to learn" to improve the development process.
• Do you need permission, or do you need to inform other departments? (e.g. you'd better seek permission from the infrastructure manager before bombing the servers with a DoS penetration test or performance stress testing). The same goes for red-team testers (who perform penetration tests without prior knowledge of the system and without the IT staff's awareness): always seek permission from management first.


Best practices

• Tests should be performed by someone other than the developer who implemented the code: to avoid blind spots, to stay objective, and to ensure good documentation.
• Determine how to share the test codes with other developers & the QA team (for code reuse and reproducible results). Reuse tests with test libraries / knowledge repository (e.g. test case library). Use version control (e.g. svn)
• Regression test: rerun the past tests to detect whether the current fix has introduced new bugs
• Automated tests are better than manual tests: repeatable, less error-prone, more efficient to run and reuse, and can be run frequently (e.g. continuous integration)
• Discuss with your client / user for realistic scenarios when defining the test data
• Find-out the typical use (e.g. the average message size, how many requests per minute) by asking the users
• Find-out the typical failures in the production (e.g. network outage) by asking the production-team
• Find out the typical environment/configuration in production (e.g. browser, OS). Do you need to consider old environments/data for backward compatibility (e.g. IE 5.0)?
• Build a test case for every requirement / use case items. Mention the requirement number in the test case document for traceability.
• Determine which tests to perform within the limited time, e.g. installation/configuration/uninstall test, user functional test, performance test, security test, compatibility test.
• Don't try to cover everything. Prioritize the test cases based on the most likely errors (e.g. which functional area, which class) and the risk.
• Avoid overlapping test cases
• Use test cases with convenient values (e.g. 10000 instead of 47921)
• Make sure that the testers have business knowledge about the domain (e.g. terminologies, business logics, workflow, typical inputs)
• Consider automatic test case generator
• Review and test the test code
• GUI prototyping/pilot test: involve only limited numbers of testers & use easier scenarios
• Consider positive (e.g. good data) as well as negative (e.g. wrong data, database connection failure) test cases
• Use test framework/tools (avoid reinventing the wheel) e.g. SOAPUI, Selenium, JMeter.
• Keep, interpret and report the test statistics, useful charts:
   o defect gap analysis: found bugs and solved bugs vs time
   o number of bugs per function/module/area (bugs tend to be concentrated in certain modules)
   o number of bugs per severity level (e.g. critical, major, minor)
   o number of bugs per status (e.g. ok, solved, unsolved, not yet tested)
   o test burndown graph: number of unsolved bugs and not-yet-run test cases vs time
   o number of bugs per root causes (e.g. incomplete requirement, database data/structure, etc).


Test checklists

Please see http://soa-java.blogspot.nl/2012/09/test-checklists.html


The test pyramid

• Level 1: automatic unit tests
• Level 2: service integration tests (e.g. the connection between services)
• Level 3: user acceptance / system tests (e.g. GUI, security, performance)


Tools

• Unit tests: junit, nunit (see the sketch after this list)
• Service functional tests e.g. SOAPUI
• Performance tests e.g. SOAP UI, Jmeter, yslow (GUI)
• Security tests e.g. SOAPUI, paros, spike, wireshark
• GUI tests e.g. Selenium, httpunit
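
As a level-1 illustration, a minimal JUnit 4 sketch (the PriceCalculator class under test is inlined and hypothetical):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Trivial class under test, inlined to keep the example self-contained
    static class PriceCalculator {
        private final double vatRate;
        PriceCalculator(double vatRate) { this.vatRate = vatRate; }
        double gross(double net) { return net * (1 + vatRate); }
    }

    @Test
    public void vatIsAddedToNetPrice() {
        PriceCalculator calc = new PriceCalculator(0.21);   // 21% VAT
        assertEquals(121.0, calc.gross(100.0), 0.001);      // delta guards against rounding error
    }
}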




Please share your comment.

Source: Steve's blog http://soa-java.blogspot.com


References:

• Software Testing and Continuous Quality Improvement by Lewis
• Code Complete by McConnell


Friday, September 7, 2012

Sending attachment: MTOM / XOP vs SWA and inline attachment

So you want to send large binaries/files via web services. This blog describes why MTOM (Message Transmission Optimization Mechanism) / XOP (XML-binary Optimized Packaging) is better than inline attachments and SWA.

Inline base64 attachment
The binary content as an inline element inside the SOAP message:
<Envelope>
 <Body>
  <sendImage>
   <filename>mybeautifulwife.jpg</filename>
   <image>.... JPEG image base64 .....</image>
  </sendImage>
 </Body>
</Envelope> 
We need to convert the binary to base64 since we use XML. The problem with this method is that the base64 representation expands the message size (by roughly one third), so it's not an efficient method.

SOAP with Attachments (SWA)
Using MIME messages (originally from the SMTP email protocol), the SOAP message becomes the first MIME part and the attachments follow in subsequent MIME parts:
Content-Type: Multipart/Related;
boundary=_MIME_boundary_;
type=text/xml;
start="<rootpathID>"
Content-Length: ...
--_MIME_boundary_
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-ID: <rootpathID>
<soapenv:Envelope>
 <soapenv:Body>
  <sendImage>
    <filename>mooiemarjo.jpg</filename>
    <image href="cid:imgID"/>
  </sendImage>
 </soapenv:Body>
</soapenv:Envelope>
--_MIME_boundary_
Content-Type: image/jpeg
Content-Transfer-Encoding: binary
Content-ID: <imgID>
...JPEG image bytes...
--_MIME_boundary_--
The problems with SWA:
• it breaks the SOAP web service model of the SOAP message being plain XML
• interoperability problems (e.g. different implementations of message-level security)



MTOM/XOP standard as a better solution:
• Similar to the SWA approach: using MIME messages
• interoperability: the MIME attachment contents logically become inline contents within the XML document, so it's easy to handle these contents with standard approaches (e.g. XSL transformation, standard security treatment using WS-Security for encryption/signature, WS-RM for QoS)
• interoperability with different web service clients, thanks to the MTOM policy declaration in the WSDL which is understood by different vendors (I have tested .NET and Java.)
• optimized (e.g. compressed in the Weblogic framework)
Content-Type: Multipart/Related;
boundary=_MIME_boundary_;
type= application/xop+xml;
start="<rootpathID>"
Content-Length: ...
--_MIME_boundary_
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-ID: <rootpathID>
<soapenv:Envelope>
 <soapenv:Body>
  <sendImage>
    <filename>mooiemarjo.jpg</filename>
    <image>
     <xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include"
      xmlns:xmlmime="http://www.w3.org/2004/11/xmlmime"
      xmlmime:contentType="image/jpeg" href="cid:imgID"/>
    </image>
  </sendImage>
 </soapenv:Body>
</soapenv:Envelope>
--_MIME_boundary_
Content-Type: image/jpeg
Content-Transfer-Encoding: binary
Content-ID: <imgID>
...JPEG image bytes...
--_MIME_boundary_--
Note that the MIME content type is application/xop+xml and we use <xop:Include> to include the attachment content.




How: server side (JAX-WS in Weblogic)
use @MTOM annotation or mtom.xml policy
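
A minimal @MTOM service sketch (ImageService and sendImage are hypothetical names; the threshold value is an assumption):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

@MTOM(threshold = 1024)   // binary parts larger than 1KB are sent as MIME attachments
@WebService
public class ImageService {
    @WebMethod
    public void sendImage(String filename, byte[] image) {
        // byte[] (or DataHandler) parameters are the candidates for XOP packaging
    }
}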

How: client side (JAX-WS in Weblogic)

Pass an MTOMFeature as an argument:
MtomService port = service.getMailServicePort(new MTOMFeature());

Using SOAPUI as a test client


MTOM attachment via SOAPUI, 3 steps:
1. Set Enable MTOM = true in the request properties
2. Upload the attachment (e.g. A3.pdf), notice the contentID
3. Set the MTOM contentID in the xml request



Improving your MTOM service:
• Security: validation & sanitization (against code injection), limit the attachment/body size (against DoS attacks), content scanning (against viruses)
• Reliability: add a guaranteed-delivery queue as the front end of the proxy

If you're an Oracle OSB fan
In OSB you can also use the email transport, but using a Java web service with the JavaMail API offers more configuration flexibility (e.g. unicode support, custom content handlers, etc.) and better performance (throughput). A good book about how to use the email transport in OSB: OSB Development Cookbook by Schmutz et al.

Javamail tips
HTML & Unicode support in the JavaMail API (the Java library to send mail to SMTP servers):
messageBodyPart.setContent(thecontent, "text/plain; charset=UTF-8");

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)


References:

• Developing Web Services with Apache CXF and Axis2 by Tong: MTOM using open-source Java (Axis2 & CXF), document-wrap or RPC style and DataHandler.
• Programming Advanced Features of JAX-WS Web Services for Oracle WebLogic Server: http://docs.oracle.com/cd/E17904_01/web.1111/e13734/mtom.htm#i281179
• MTOM using JAX-WS (with example code): http://www.mkyong.com/webservices/jax-ws/jax-ws-attachment-with-mtom/

Wednesday, September 5, 2012

The Review Process

As a continuation of my blog about Software Review http://soa-java.blogspot.nl/2012/04/software-review.html



Review Process checklist
• Determine when to do the review in the software process e.g.
    * code review after finishing the code implementation before delivering to QA/test team (waterfall)
    * requirement/product backlog review before sprint planning (Scrum)
    * design review before starting coding (well... even with an Agile process you need to start with some kind of design in your mind that you can discuss with your peer reviewers).
• Define the process/how review will be conducted e.g.
    * code reading (most effective)
    * formal review meeting (less effective)
    * informal walk-through (less effective)
    * customer demo (mainly for functional requirements).
• Determine entry criteria (e.g. specification documents are available)
• Determine exit criteria (e.g. approval by product owner & SOA governance board)
• Determine metrics (e.g. LOC/hour, time spend, error list with severity & type)
• Are tools available to assist review process (e.g. checkstyle, PMD, spelling checker, xml/html validator, test suites)?
• Determine communication channel (e.g. Trac wiki, bugzilla)
• Determine who will play the reviewer role, e.g. architects, security specialist, external auditor, customer. You may have several reviewers assigned to specific areas (e.g. security specialist, database specialist, customer to review use cases).
• Determine the time needed for review (based on code complexity/size/maturity, programmer's skills, risk analysis). Discuss the time/plan with project manager / team lead to obtain management support. Schedule the meetings. Set time-limits for meetings & other review works.
• Do the review, register the anomalies.
• Discuss whether or not a fix is needed.
• Discuss the fix, decide which version the fix should be done, who will do the reworks, estimate/plan the reworks
• Determine the exit decision e.g. re-inspection after required reworks, minor reworks with no further verification needed.
• Reschedule the follow up/ reinspection for reworks.
• Collect "lessons to learn" to improve the development & review process, for example in a company wiki knowledge repository.

Review Outputs
• List of anomalies with severity/risk and types (e.g. missing functional requirement, doesn't conform to standards/guidelines, security, performance, etc.). The list is documented, for example in Trac/Bugzilla, and made available to the developer team, QA team, product owner and management.
• List of actions for each anomaly (e.g. don't fix, fix in a future release, fix immediately in the current Scrum sprint), who will implement it, and when the follow-up is

Best practices
• use a checklist, preferably shorter than one A4 page
• The process can be enforced by software infrastructure (e.g. a deployment script, the TracWorkflowAdmin plugin)
• the reviewer must be someone other than the author of the requirement / design / code.
• limit the review time; make a clear agreement / planning between developers, reviewers, project manager and customer. Waiting for a review shouldn't become an excuse to block the progress of the project.


Other tips
Read this article:
http://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
* don't review too much code at once
* take your time
* the quality of the code improves if the author has done a self-review and annotated it (explaining and defending the rationale) before the review
* verify that the defects are actually fixed
* ego effect: the review process will motivate the author to be less sloppy, even if you only review a small percentage of the code
* use automated review tools (e.g. checkstyle in java eclipse)