Saturday, December 31, 2011

XQuery vs XSLT comparison: which to use?


The advantages of XSLT:
* XSLT is itself XML, so XSLT files can be parsed, validated, and dynamically generated (e.g. from templates) using XML/SOA tools.
* Push/content-driven approach: XSLT's template rules work well for loosely structured / less predictable documents (e.g. HTML), where the stylesheet has to react dynamically to the content of the child elements.
* Templates are XSLT's strong point, although it's possible to simulate them with a user-defined XQuery function using tree traversal.
* With xsl:import you can override templates, improving reusability (analogous to inheritance & polymorphism in OO languages).

The advantages of XQuery:
* Pull/program-driven approach: XQuery is easier than XSLT for highly structured / predictable documents (e.g. a WSDL-defined SOAP message), where you know in advance exactly which elements to pull out.
* XQuery is less verbose and less cumbersome than XSLT, so it's easier to learn.
* XQuery enforces strict typing, using the datatype definitions in the schemas.

Another deciding factor is the support in the tools you use: e.g. Oracle SOA Suite has a better XSLT editor and no XQuery editor, while Oracle OSB has better XQuery support than XSLT support. In general XSLT is more widely adopted in SOA tools than XQuery, especially in older tools.

My experience: in my job I need to know both. When I started using XML transformation (around 2006), XQuery was not yet available in our tooling, so XSLT was the only option. Nowadays people in my office use XQuery instead of XSLT since they mostly work with Oracle OSB, which has better XQuery support, so I've had to adopt XQuery more.

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)





XSLT: Axis and Predicate power!

This blog shows how to take advantage of axes and predicates in XPath expressions of the form axis::test[predicate].
For example you have this XML:

<Projects>
<Project>
<ProjectName> Teach my toddler computer </ProjectName>

<ProjectActivities>
<ProjectActivity>
<ActivityName> Install Qimo Linux </ActivityName>
</ProjectActivity>
<ProjectActivity>
<ActivityName> Teach mouse game for mouse training </ActivityName>
</ProjectActivity>
</ProjectActivities>

<Elements>
<ProjectElement>
<ElementName>mouse skills</ElementName>
</ProjectElement>
<ProjectElement>
<ElementName>menu navigation</ElementName>
</ProjectElement>
</Elements>
</Project>

<Project>

<ProjectName> Make my wife happy</ProjectName>

<ProjectActivities>
<ProjectActivity>
<ActivityName> Buying flowers </ActivityName>
</ProjectActivity>
<ProjectActivity>
<ActivityName> Morning kiss </ActivityName>
</ProjectActivity>
</ProjectActivities>

<Elements>
<ProjectElement>
<ElementName>love</ElementName>
</ProjectElement>
</Elements>
</Project>

</Projects>

you want to transform this XML to this text:

Project: Teach my toddler computer
*Activities: Install Qimo Linux
**Elements: mouse skills, menu navigation
*Activities: Teach mouse game for mouse training
**Elements: mouse skills, menu navigation
Project: Make my wife happy
*Activities: Buying flowers
**Elements: love
*Activities: Morning kiss
**Elements: love

using this xslt:


<xsl:for-each select="//Projects/Project">
<xsl:variable name="nuproj" select="ProjectName"/>
Project:<xsl:value-of select="$nuproj"/>
<xsl:for-each select="ProjectActivities/ProjectActivity">
*Activities:<xsl:value-of select="ActivityName"/>
<xsl:text>&#10;</xsl:text>
<xsl:for-each select="following::Elements[parent::Project/ProjectName=$nuproj]/ProjectElement">
**Elements:<xsl:value-of select="ElementName"/>,
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:for-each>



In this XSL we iterate over ProjectActivity within each Project. During this iteration the current context is a ProjectActivity, but we also want to iterate over each Elements/ProjectElement. We solve this using an XPath expression of the form axis::test[predicate]:

following::Elements[parent::Project/ProjectName=$nuproj]/ProjectElement

Here "following" is the axis, which says we select the Elements (and their ProjectElement children) located after the current context ProjectActivity in document order. There are many other axes, such as preceding, parent, descendant, etc., which specify a location relative to the current context.

[parent::Project/ProjectName=$nuproj] is the predicate, which specifies the condition for which Elements node is selected (since there is more than one Elements node located after the current context ProjectActivity). In this case we specify that the selected Elements node must have a parent Project whose ProjectName equals the variable $nuproj, i.e. the Elements node that belongs to the same Project as the current context ProjectActivity.

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:


Jesper Tverskov's axis tutorial
XSLT 2.0 and XPath 2.0 Programmer's Reference

Testing for Empty Elements in XSLT & XQuery


While processing an XML document with XSLT, you sometimes need to access a node but want to make sure that the node exists, otherwise the XSLT processor will complain (analogous to the infamous null pointer exception in Java).

For example using this XML:

<Projects>
<Project>
<ProjectName> Teach my toddler computer </ProjectName>
<ProjectActivities>
<ProjectActivity>
<ActivityName> Install Qimo Linux </ActivityName>
</ProjectActivity>
<ProjectActivity>
<ActivityName> Teach mouse game for mouse training </ActivityName>
</ProjectActivity>
</ProjectActivities>
</Project>

<Project>
<ProjectName> Teach my kids piano </ProjectName>
<ProjectActivities/>
</Project>

</Projects>

Suppose you want to iterate over ProjectActivity within each Project, but some projects have no ProjectActivity (such as the "Teach my kids piano" project above).

XSLT solution
* To test whether the element exists: xsl:if test="ProjectActivities/ProjectActivity"
* To test whether the element exists and is non-empty: xsl:if test="string(ProjectActivities/ProjectActivity)"
* To test whether the element exists and has text or any element content (e.g. subnodes): xsl:if test="ProjectActivities/ProjectActivity/text() or ProjectActivities/ProjectActivity/*"

So for example using this test in this XSL:

<xsl:for-each select="//Projects/Project">
<xsl:variable name="nuproj" select="ProjectName"/>
Project:<xsl:value-of select="$nuproj"/>
<xsl:if test="ProjectActivities/ProjectActivity">
<xsl:for-each select="ProjectActivities/ProjectActivity">
To do:<xsl:value-of select="ActivityName"/>
<xsl:text>&#10;</xsl:text>
</xsl:for-each>
</xsl:if>
<xsl:text>&#10;</xsl:text>
</xsl:for-each>

you will expect this result:

Project: Teach my toddler computer
To do: Install Qimo Linux
Project: Teach my kids piano

XQuery solution:
* To test whether the element exists: exists($ProjectVariable/ProjectActivities/ProjectActivity)
or, using the same strategy as the XSLT above:
if ($ProjectVariable/ProjectActivities/ProjectActivity) then ... else ...

* To test whether the string is non-empty: string-length($ProjectVariable/ProjectActivities/ProjectActivity) != 0
or, using the same strategy as the XSLT above:
if (string($ProjectVariable/ProjectActivities/ProjectActivity) != "") then ... else ...

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:
XSLT Empty Element tutorial

Beginning XSLT and XPath: by Ian Williams

Tuesday, December 20, 2011

Maven, Artifactory and Hudson for Oracle OSB Continuous Integration

In the previous blog we discussed using Ant and Hudson/Jenkins for continuous integration. What hasn't been explicitly discussed is dependency management, which Maven can handle handsomely.

The benefits of using Maven instead of Ant:
1. standardization following best practices (e.g. directory structure), which leads to a shorter/simpler configuration file (pom.xml), less maintenance, and higher reusability
2. transitive dependency management: Maven will find the libraries you need and resolve conflicts between them. Perhaps you know this concept already if you've used the Ivy framework with Ant, but it is central in Maven, so many innovations have been built around it (e.g. enterprise repositories).
For example, I just made an adjustment and committed StudentRegistrationService-ver2.0, which depends on LDAPService-ver2.0 and hibernate-ver3.jar. When I deploy StudentRegistrationService-ver2.0, Maven will also include LDAPService-ver2.0 and hibernate-ver3.jar from an enterprise repository that stores all the libraries used in your company. If the build & test processes succeed, the artifact of my new StudentRegistrationService-ver2.0 will be added to the repository, so other services which consume my service can use this version 2.0. Better still, you can specify version dependencies using ranges (e.g. min version, max version): I can specify that my service depends on LDAPService max version 2.0 (since I don't support the interface of the newer LDAPService yet) and on PaymentService min version 3.1.1, since PaymentService versions below 3.1.1 contain a payment bug. Here is an example of defining these dependencies in the pom.xml of StudentRegistrationService:

<dependency>
<groupId>TUD</groupId>
<artifactId>LDAPService</artifactId>
<version>[0,2.0]</version>
</dependency>

<dependency>
<groupId>TUD</groupId>
<artifactId>PaymentService</artifactId>
<version>[3.1.1,)</version>
</dependency>

An illustration of how it works:


1. Using Hudson/Jenkins to let the svn commit trigger the Maven build
Please see the previous blog about how to install and setup Hudson/Jenkins.

For this example, I configure Hudson/Jenkins to poll the svn server every minute (set by the schedule "* * * * *" in cron format). When there is a new commit in mysvnproject, the Maven "install" goal (along with the preceding lifecycle phases, i.e. compile, test) will be invoked.


2a. Checkout using mvn-scm plugin

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-scm-plugin</artifactId>
<version>1.1</version>
<configuration>
<username>username</username>
<password>password</password>
</configuration>
<executions>
<execution>
<id>checkout</id>
<configuration>
<connectionUrl>mysvnserver</connectionUrl>
<checkoutDirectory>mysvndir</checkoutDirectory>
<excludes>folder2exclude/*</excludes>
</configuration>
<phase>compile</phase>
<goals>
<goal>checkout</goal>
</goals>
</execution>
</executions>
</plugin>

2b. Build the OSB project; you can use the same Ant task as in my previous blog, wrapped with the maven-antrun-plugin.

3a. Obtain the dependencies from the repositories, using dependency:copy-dependencies or dependency:copy.

3b. Deploy the OSB project and its dependencies; you can use the same Ant task as in my previous blog, wrapped with the maven-antrun-plugin.

4. Run the SoapUI web service test using the maven-soapui-plugin (or alternatively use testrunner.bat / testrunner.sh, as in my other blog)

<plugin>
<groupId>eviware</groupId>
<artifactId>maven-soapui-plugin</artifactId>
<version>3.0</version>
<executions>
<execution>
<phase>test</phase>
<id>soapuitest</id>
<configuration>
<projectFile>${mysoapuitestfile}</projectFile>
<outputFolder>${testreportdir}</outputFolder>
<junitReport>true</junitReport>
<exportAll>true</exportAll>
<printReport>true</printReport>
<settingsFile>${soapuisettingfile}</settingsFile>
</configuration>
<goals>
<goal>test</goal>
</goals>
</execution>
</executions>
</plugin>

5. Archiving the artifact in an enterprise repository.
The benefits of using an enterprise repository:
• your developers don't have to search, download, and install the libs manually
• it's faster & more reliable than downloading the libs from the internet; the concept is similar to a proxy server that caches internet content
• it stores the artifacts of your company projects from Ant/Maven builds, so they are readily available for testing and shipping
• web administration interface, search, backup, import/export

I chose Artifactory as the enterprise repository since it has more features than other products, such as: XPath search inside XML/POM files, Hudson integration (e.g. for build promotion), LDAP connectivity, a cloud (SaaS) option, and easy installation (it runs in an embedded Jetty server or as a service on Windows/Linux).
You can use the Hudson Artifactory plugin to integrate Artifactory into your Hudson/Jenkins process.

I use 3 local repositories inside Artifactory for different library categories:
• open source / ibiblio libraries (e.g. Apache Commons jars); Artifactory can download these automatically
• proprietary libraries (e.g. the Oracle JDBC jar); you need to install these manually (e.g. via the Artifactory web interface)
• company libraries; you install these manually or via the Hudson build, as done in this example.
For the company repository, I configure it so that it can handle both release/stable versions (e.g. PaymentService-ver3.1.1, which is already well tested and approved) and snapshot versions (e.g. my StudentRegistrationService-2.0 isn't finished yet, but I want to make it available to other projects that depend on it). For example, in artifactory.config.xml:

<localRepository>
<key>tud-repo</key>
<description>mycompany-libs</description>
<handleReleases>true</handleReleases>
<handleSnapshots>true</handleSnapshots>
</localRepository>

<localRepository>
<key>ibiblio-repo</key>
<description>stable-opensource-libs</description>
<handleReleases>true</handleReleases>
<handleSnapshots>false</handleSnapshots>
</localRepository>

You need to declare these repositories in your pom.xml (or, with a similar approach, in settings.xml for all of your projects):

<repositories>
<repository>
<id>ibiblio-repo</id>
<url>http://myreposerver:port/artifactory/repo</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>tud-repo</id>
<url>http://myreposerver:port/artifactory/repo</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>ibiblio-repo</id>
<url>http://myreposerver:port/artifactory/repo</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>tud-repo</id>
<url>http://myreposerver:port/artifactory/repo</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>

For the sake of clarity some details are omitted from this blog. The concepts in this blog also work for non-OSB projects (e.g. Java/J2EE applications).

Any comments are welcome :)



See also: http://soa-java.blogspot.nl/2011/03/soa-continuous-integration-test.html


References:
Setting Up a Maven Repository
Comparison Maven repository: Archiva, Artifactory, Nexus
Amis blog: Soapui test with maven


Continuous integration for Oracle OSB projects using Ant and Hudson / Jenkins

Continuous integration (CI) is pervasive, and not for nothing: CI has many benefits, such as avoiding last-minute integration hell and improving software quality.


The principles of continuous integration:
1. every commit to the SCM should be built, so that you get early feedback if the build breaks
2. automate the build for consistency
3. automate deployment for consistency
4. automate testing of the build artifact for consistency
5. archive the build artifact so that it's readily available (e.g. for further tests)
6. keep the reports & metrics from the builds & tests

You can also add extra steps: for example, when a commit breaks the tests you can reject it by rolling back the deployment (using the undeploy wlst task) and rolling back the svn commit with a backward merge:
svn merge -r currentver:previousver ; svn commit

This blog will show how to achieve these steps using Ant and Hudson/Jenkins. We use Ant since in many organizations Ant is better established than Maven. In another blog we will discuss how to achieve the same goal using Maven and an artifact repository (e.g. Artifactory or Nexus), which is handier than Ant, particularly with respect to dependency management.

How it works:
1. Using Hudson/Jenkins to let the svn commit trigger the Ant build
I chose Hudson/Jenkins since it's easy to use, has good features, scales well, and is recommended by many people (including folks working at Oracle). You can see Jenkins as a new version of Hudson; Jenkins was created to avoid legal problems with Oracle when Kohsuke Kawaguchi, Hudson's creator, left Sun/Oracle.
Installing Hudson/Jenkins is easy; on Windows it can run as a Windows service. Hudson/Jenkins contains an embedded Winstone servlet engine, so you can also run it using
java -jar hudson.war --httpPort=aportnumber
To install Hudson/Jenkins on WebLogic you need to add the deployment descriptor weblogic.xml to solve classpath conflicts for certain jars (depending on which version you install); also, Hudson/Jenkins will not work on WebLogic servers with SOA/OSB extensions due to some conflicting settings.
You can add plugins, for example: locale (to set languages), svn-related plugins, trac (to recognize trac tags e.g. fixed#), cvs/svn browser viewers, html report, scp artifact repository, and promoted builds, among others.

You need to set some configurations such as JDK location, svn connection, Ant location, Maven location, Junit test report location, artifacts location, SMTP/email server for notifications.

For this example, I configure Hudson/Jenkins to poll the svn server every minute (set by the schedule "* * * * *" in cron format). When there is a new commit in mysvnproject, the Ant "main" target will be invoked.

2. Check out the newly committed svn project using svntask:

<target name="checkout">
<delete failonerror="false" includeemptydirs="true" quiet="true"
dir="${servicename}" />
<svn username="${subversion.user}" password="${subversion.password}">
<checkout url="${subversion.path}" destPath="${servicename}" />
</svn>
</target>

Build the checked-out code into an OSB jar using ConfigExport:

<target name="makeosbjar" depends="deletemetadata">

<!-- osb jar compile -->
<java dir="${osb.eclipse.home}"
jar="${osb.eclipse.home}/plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar"
fork="true"
failonerror="true"
maxmemory="768m">
<jvmarg line="-XX:MaxPermSize=256m" />
<arg line="-application com.bea.alsb.core.ConfigExport" />
<arg line="-data ${workdir}" />
<arg line="-configProject ${osb.config.project}" />
<arg line="-configJar ${jardir}/${packagename}" />
<arg line="-configSubProjects ${servicename}" />
<sysproperty key="weblogic.home" value="${osb.weblogic.home}" />
<sysproperty key="osb.home" value="${osb.home}" />
</java>
</target>

3. Deploy the jar to the OSB server using WLST import (a python script)

<target name="deployOSB">
<wlst fileName="${import.script}" debug="true" failOnError="false"
arguments="${wls.username} ${wls.password} ${wls.server} ${servicename} ${jardir}/${packagename} ${import.customFile}">
<script> <!-- run these before import.py -->
adminUser=sys.argv[1]
adminPassword=sys.argv[2]
adminUrl=sys.argv[3]
passphrase = "osb"
project=sys.argv[4]
importJar=sys.argv[5]
customFile=sys.argv[6]
connect(adminUser,adminPassword,adminUrl)
domainRuntime()
</script>
</wlst>
</target>

4. Run the SOAPUI web service test and generate the junit test report:
<target name="soapui-test">
<exec executable="cmd.exe" osfamily="windows" failonerror="false">
<arg line="/c ${testrunner.bat} -j -freports ${soapui.test}"/>
</exec>

<junitreport todir="${testreportdir}">
<fileset dir="reports">
<include name="TEST-*.xml"/>
</fileset>
<report format="frames" todir="${testreportdir}/html"/>
</junitreport>
</target>


Example of Hudson/Jenkins output:

Hudson also sends email notifications:

For the sake of clarity, some details are omitted from this blog (e.g. the Ant classpath for the libs needed: svnant, svnjavahl, svnclientadapter, xmltask); these details can be found in the build.xml. Please download the build.xml, build.properties and the import.py wlst script here.

The concepts in this blog work also for non OSB projects (e.g. Java/J2EE applications).

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




The file download is made possible by OpenDrive


References:

Using the Oracle Service Bus Plug-ins for Workshop for WebLogic http://docs.oracle.com/cd/E13159_01/osb/docs10gr3/eclipsehelp/tasks.html
Using WLST http://docs.oracle.com/cd/E15051_01/wls/docs103/config_scripting/using_WLST.html
Biemond's blog http://biemond.blogspot.com/2010/07/osb-11g-ant-deployment-scripts.html
How-to-deploy-Hudson-to-weblogic http://jenkins.361315.n4.nabble.com/How-to-deploy-Hudson-to-weblogic-td3246817.html

VMware virtual network configuration: NAT port forwarding

Playing with virtualization is fun. You can entertain yourself as if you have an army of many computers, connected with one or more (virtual) networks.

There are 3 possible configurations of virtual networking in VMware:

1. Host-only: the virtual computers (VMs) are connected to each other in virtual networks inside a host (myPC), which is connected to the internet via IP 123.456.789.012. They can access internet data, but it's impossible for a computer on the internet (e.g. Annie) to send a request to these VMs, since the virtual networks are private (e.g. IPs in the 192.168.X.X range).

2. Bridged: you're lucky. The network administrator in your company is your best friend, so he gave you several free IP addresses. Thus the VM has its own IP address (e.g. 123.456.789.013). The VM acts just like a real PC; computers on the internet can reach this VM.

3. NAT: your organization has a limited number of IP addresses, but with NAT you can still let other computers on the internet access your VM. For example, you have a webserver on port 333 in the VM 192.168.11.12; using Network Address Translation (NAT) you can use port 444 on the myPC host 123.456.789.012 to expose this webserver to the internet.

The VMware network configuration can be set using vmware-config.pl on Linux or vmnetcfg.exe on Windows. Here is a screenshot example of how to configure the NAT port forwarding from the host 123.456.789.012:444 to the VM webserver at 192.168.11.12:333.



Notes:
How to get vmnetcfg.exe for VMware Player Windows:
  1. Download the installer
  2. Extract the installer: VMware-player-installer.exe /e tempdir
  3. Extract the network.cab in the tempdir, it contains vmnetcfg.exe
If you update VMware Player, vmnetcfg.exe is not extracted by the standard installation, so the vmnetcfg.exe in your VMware directory is still the old version, which mismatches the new VMware version. You then need to repeat this procedure to replace the old vmnetcfg.exe in your VMware directory.


Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




Note: you can achieve the same thing with the iptables service on Linux, but this vmware-config approach is easier.


Monday, December 19, 2011

Android Simulator : simple up and running...


You don't have a smartphone yet, but you're curious what your website looks like on a smartphone?

Cost: free
Time needed: 5 minutes
Steps:
1. download and install the Android development SDK: http://developer.android.com/sdk/index.html
2. create an Android Virtual Device (AVD) profile: start menu > manage AVD > create an AVD (e.g. Android 2.3.3 HVGA)
3. run the emulator: e.g. emulator -avd name-of-your-avd




Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)

Mobile web development


Why mobile webs are different
You need to adapt your web application since mobile usability differs from the desktop:
• smaller screen size (e.g. 320x480)
• vast variation of browsers & platforms (each with different limitations & compatibilities, e.g. some don't support JavaScript yet)
• competing attention / multitasking users (e.g. the user may use the mobile while waiting for a bus or talking to friends); this leads to more distraction and shorter sessions (3 minutes on average instead of the 10-minute average of desktop users)
• different input devices (e.g. multitouch & virtual keyboard via touch screen instead of desktop keyboard & mouse)

Design strategies
• define user's goals and how they can accomplish their goal in your web with minimum efforts (i.e. minimum clicks/inputs)
• prioritize the features of the desktop version of your web, implement only the top 20% in your mobile version

Usability tips
Simplicity
• implement only 20% of the features of your desktop web application
• minimize user efforts & user inputs, try to infer the context from history/cookies, geolocation, IP address
• no more than 3 clicks (or pages) depth
• limit the main navigation to 4 links, limit the total links in a page to 10
• minimize text, use short/simple words
• to the point, no welcome screen
• limit the bandwidth: simple image, don't use text-image. The bigger the bandwidth, the more users have to pay & the slower your service is.

Layout
• avoid horizontal scrolling
• avoid multi columns
• use all area, 100% width (don't use side menu, side advertisements etc)
• use fluid layout instead of fixed layout

General tips
• always provide a link to the desktop version
• the most used features at the top (e.g. login in a bank service, search in a library service)
• provide enough space (min 20px) for clickable elements and links, since finger touches need more space than using a mouse
• use background colours to separate sections
• use a list instead of a table

Besides these, many of the desktop UI usability rules still apply in the mobile world, such as visual consistency, legible fonts, clear structure, consistent alignment, etc.

Device awareness and content adaptation
A common approach to handling the variation of mobile browsers/platforms is to group devices according to their capabilities (the groups don't need to be mutually exclusive), for example based on screen resolution, portrait/landscape orientation support, javascript/ajax support, geolocation support, or markup language (old WML, XHTML-MP, HTML5).

Starting with a basic version of your web (e.g. plain HTML without CSS & JavaScript), create a different version for each group (e.g. each screen resolution) using different CSS and technologies (JavaScript, geolocation, etc.).

Your web application needs to be aware of the capabilities of the client browser/device and then adapt the content according to the group the device falls into. So first we need to know which browser/device the client uses, via the HTTP request header, for example for a (rather old) iPhone:

User-Agent: Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_2_1 like Mac OS X;
en-us) AppleWebKit/525.18.1 (KHTML, like Gecko) Version/3.1.1
Mobile/5H11 Safari/525.20
Accept: text/xml,application/xml,application/xhtml+xml,


Then use a device library (e.g. WURFL) to get the list of device/browser capabilities (screen size, javascript/ajax support, etc.).

The weakness of this approach is that you need to maintain several versions of your code (e.g. iphone4.css, iphone3.css, android23.css, vintagemobile.css, ...). The mobile device landscape changes fast, and you need to keep up with it (and hopefully so do the maintainers of the device library you use). Also, the information in the request header & the device libraries may not be accurate. That's why it's recommended to keep the number of groups as small as possible.
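As a rough sketch of this grouping idea in Python (the group names and the User-Agent substrings matched below are my own illustrative assumptions, not WURFL data; a real device library inspects far more capabilities):

```python
# Toy device grouping based on User-Agent substrings.
# A real setup would query a device library such as WURFL;
# the groups and matching rules here are illustrative assumptions.
def device_group(user_agent):
    ua = user_agent.lower()
    if "iphone" in ua or "android" in ua:
        return "smartphone"      # modern touch browser: richer CSS/JS version
    if "midp" in ua or "wap" in ua:
        return "vintagemobile"   # old feature phone: basic markup only
    return "desktop"             # fall back to the full desktop site

ua = ("Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_2_1 like Mac OS X; en-us) "
      "AppleWebKit/525.18.1 (KHTML, like Gecko) Version/3.1.1 "
      "Mobile/5H11 Safari/525.20")
print(device_group(ua))  # smartphone
```

The server would then serve, for example, a smartphone.css or vintagemobile.css version of the page depending on the group.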

An example comparison of a desktop and a mobile website
Amazon is one of the most progressive dot-com companies; we can learn by observing their design. This is the desktop version:


This is the mobile version (Android simulator, 320x480):

The mobile version has
• much less features than the desktop version
• less text, less images
• list (for links)
• one column
• the most important features at the top of the page (the shopping cart and the search)

In the future I will discuss how to implement a geolocation mobile web to determine in which building a student is located. Based on this, we can provide contextual information to the student, for example about the course room/schedule, computer lab availability, and the locations of other students from the same course/study year.

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:
Programming the Mobile Web


Beginning Smartphone Web Development


The UI snapshot examples are the courtesy of www.amazon.com

Friday, November 4, 2011

Java Callout in OSB

Sometimes it's handier to implement your algorithm in Java than in OSB. This short tutorial shows you how to call a Java class from OSB.

The Java code
Suppose you have implemented your algorithm in a Java class as follows:

package tud;

public class JavaHelper {
public static String occupy(String street){
//do occupy
return street+" is occupied.";
}
}

The method has to be static. Package this class into a jar (e.g. using Ant, Maven, "Eclipse export to jar", or "jar -cf jarname packagename"). Deploy this jar in WebLogic (go to /console, then choose Deployments > Install).

The Java callout action in OSB
Create a Java callout action in message flow in your proxy:

* In the "method" field you specify the package.class.operation name from your Java jar.
* In the "action expression" field you specify the input parameter for your Java class; e.g. here I take the "Input" element from the body.
* In the "result value" field you specify the variable that will contain the result, e.g. "occupy" in this case.

Test
Using the input "Sesame street", you will get the occupy variable "Sesame street is occupied.", produced by the occupy Java method.



In this article we used a primitive type (in this case String); you can also pass an XmlObject (e.g. using the XMLBeans framework to create Java class types from a schema), as described in Eric Elzinga's blog: http://www.xenta.nl/blog/2011/08/29/oracle-service-bus-java-callouts-with-xmlobjects/

Source: Steve's blog http://soa-java.blogspot.com

Wednesday, September 28, 2011

Agile estimation & planning

You might encounter several weird buzzwords when reading about agile estimation methods. This blog tries to give a down-to-earth discussion to help you understand them.

Planning Poker
A method to estimate work amounts, which inherits some similarities to the poker game.
The steps:
1. every team member gives an estimation for each user stories (e.g. 8 man-hours for the "customer registration GUI" story), but the number is hidden until all the members have given their estimations. You can use a real set of poker cards to make the process more fun, each team member chooses a card number then puts it upside down to hide the number.
2. when everybody has given his/her estimate, discuss the numbers (e.g. Jip believes that building the GUI will cost 6 points using Struts, Janneke gives 2 points instead using simple PHP, the team will discuss the pros/cons of each method/estimation)
3. repeat the process until the numbers converge (the team members estimates become closer & closer to each other)

Variations:
• online instead of a face-to-face planning meeting, e.g. using http://planningpoker.com/ . I think face-to-face is better & more fun, but it's not always practical for distributed teams.
• publish the user stories and let people outside the team estimate. E.g. none of the developers in the Jip & Janneke team has experience with SOA, while Annie, a colleague in another team, has extensive SOA experience, so Annie can help estimate the SOA user story. Beware that since the estimate comes from outside the team, it may not reflect the team's velocity, but at least it can be a starting point for further discussion/estimation.


Tabletop estimation
Another estimation methods by arranging the user stories on a tabletop.
The steps:
1. write all the user stories on small cards
2. compare and arrange the positions of the cards based on their magnitudes (e.g. the story "buying an ice cream" is simpler than "arranging a marriage", so we put "buying an ice cream" on the left)
3. discuss the order and change it if necessary
4. assign a number to each of the stories
5. discuss the numbers.


Simple Release planning
The number of iterations = total story points / velocity
The number of remaining iterations = remaining story points / velocity
There are many ways to define the velocity, e.g. the average of the last N velocities, or a pessimistic one (the average of the N worst velocities).
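The release-planning arithmetic above can be sketched as follows; the backlog sizes and velocity history are made-up numbers:

```python
import math

def iterations(story_points, velocity):
    """Number of iterations needed, rounded up to whole iterations."""
    return math.ceil(story_points / velocity)

# Hypothetical backlog: 120 points total, team velocity 20 points/iteration.
print(iterations(120, 20))       # total plan
print(iterations(120 - 70, 20))  # remaining plan after burning 70 points

def pessimistic_velocity(history, n=3):
    """A pessimistic velocity: the average of the n worst iterations."""
    return sum(sorted(history)[:n]) / n

print(pessimistic_velocity([18, 22, 15, 25, 17]))
```

Rounding up matters: 2.5 iterations of work still occupies 3 sprints on the calendar.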

Slacks/buffers
A slack is extra time that you put in your schedule. It may help you meet your deadline when you overrun the schedule; otherwise you can use the slack time for:
• refactoring / paying technical debt
• learning a new framework / research time
Buffers serve basically the same idea; you can use feature buffers and/or schedule buffers.

Fibonacci numbers
When choosing an estimate number, Mike Cohn proposed to use Fibonacci numbers (i.e. each number is the sum of the previous two), e.g. 0, 1, 2, 3, 5, 8, 13, 21, 34, ... instead of simple integers (e.g. ..., 4, 5, 6, 7, 8, ...). The reason is that the bigger the order of magnitude, the bigger the uncertainty (and thus the gap between the numbers). Personally I prefer to use just simple integers.
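A sketch of generating the scale listed above (following the sequence from the post, with the duplicate leading 1 dropped):

```python
def story_point_scale(n):
    """First n numbers of the Fibonacci estimation scale:
    0, 1, 2, 3, 5, 8, 13, 21, 34, ..."""
    a, b = 1, 2
    scale = [0, 1]
    while len(scale) < n:
        scale.append(b)
        a, b = b, a + b
    return scale

print(story_point_scale(9))  # -> [0, 1, 2, 3, 5, 8, 13, 21, 34]
```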


Incorporating risks to your planning
The risks will prolong the remaining number of iterations above:
The number of remaining iterations = ((remaining story points / velocity) × risk multiplier) + total risk exposure.

So there are 2 risk factors that will prolong the project: the total risk exposure (due to project-specific risks, e.g. some developers may get sick or leave) and the risk multiplier (due to general risks, e.g. how stable the development process in this company is):

The delay due to the project-specific risks is quantified by:
Total risk exposure = sum_i (probability_risk_i * cost_risk_i). For example we have 2 risks: Bumba the GUI engineer, who is often sick (probability 0.4, costing the project 0.5 iteration more), and the Circus server that needs an update this month (probability 0.9, costing the project 1 iteration more). So the total risk exposure = 0.4*0.5 + 0.9*1 = 1.1 iterations.
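The total risk exposure formula can be checked with a few lines of Python, using the hypothetical Bumba/Circus risks above:

```python
def total_risk_exposure(risks):
    """Sum of probability * cost over all identified risks (in iterations)."""
    return sum(p * cost for p, cost in risks)

risks = [
    (0.4, 0.5),  # Bumba the GUI engineer is often sick
    (0.9, 1.0),  # the Circus server needs an update this month
]
print(total_risk_exposure(risks))  # about 1.1 iterations
```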

The velocity reduction due to general risk is quantified by a risk multiplier table:

As we see in this table, the general risk is expressed as a probability, so the resulting estimate of the remaining iterations is not a (deterministic) point but a (probability) bar. So with a risky process (e.g. the Circus dev team just moved to the SOA environment for the first time), the velocity may be 2× slower in the 0.5-probability case and 4× slower in the 0.9-probability case.
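Putting both risk factors together — assuming the multiplier scales up the base estimate and the exposure is added on top — a probabilistic estimate might look like this (all numbers are illustrative):

```python
def remaining_iterations(points, velocity, multiplier, exposure):
    """Base estimate scaled by the general-risk multiplier,
    plus the project-specific risk exposure (in iterations)."""
    return (points / velocity) * multiplier + exposure

base = remaining_iterations(50, 20, 1.0, 0.0)  # no-risk baseline: 2.5 iterations
p50 = remaining_iterations(50, 20, 2.0, 1.1)   # 0.5-probability case (2x slower)
p90 = remaining_iterations(50, 20, 4.0, 1.1)   # 0.9-probability case (4x slower)
print(base, p50, p90)
```

Reporting the estimate as a range (here roughly 6 to 11 iterations) rather than a single number communicates the uncertainty honestly.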





How about your experiences with planning/estimation methods? Which work and which don't? Why? Please share in the comments.

Source: Steve's blog http://soa-java.blogspot.com

References:
Agile Estimating and Planning


The Art of Agile Development


Agile Software Requirements

Thursday, September 22, 2011

Performance testing


Don't guess, test it!

Why performance test:
• to know the performance (e.g. can I handle 100 users, how fast can I refresh the GUI)
• to know how well it scales (e.g. what if I get 10× more customers)
• to support tuning of parameters, design/architecture decisions, and queries

Tips:
• instead of postponing the performance test until development finishes, you can do performance tests early, in the architecture design phase / spike solutions, to choose which framework to use, e.g. JSP or GWT/Ajax, Hibernate or iBATIS. In the early phase of architecture definition, the cost of a wrong decision can be expensive or irreversible. If you wait until the product is finished and only then discover that you made a wrong choice in the early design, your developers need to spend time redoing the work (e.g. rewriting the JSP front end in GWT/Ajax).
• in the agile/test-driven development spirit, you may even incorporate the performance test into your continuous integration / regression tests.
• the environment matters: the test result on your development PC will differ from the result on the production server, so use a test environment that mimics production as closely as possible
• run the measurement tooling on different hardware than the system under test
• remove data which damage the quality of statistics (e.g. the initial burn-in period)
• use random think time

Steps:
1. preliminary test: explore (e.g. the range of #users), set the test environment parameters (e.g. JVM heap size)
2. baseline test: typical usage for base comparison
3. stress test to the limit: increase the load until the breaking point, to learn the limit, the bottleneck, and the behaviour in degradation mode
4. endurance test: a load test (high load, but not yet overwhelming) run for long hours, to detect memory leaks and resources/connections which are not properly closed, and to learn the failure behaviour (e.g. error handling)
5. stress test to fail: deprive the system of resources (e.g. overwhelm the CPU with other heavy tasks, turn off the network), to learn the failure behaviour (e.g. error handling)
6. vary the parameters (e.g. Java heap size, #clusters) / architecture (e.g. load balancer) / SQL queries / business logic, then repeat the baseline and stress-to-the-limit phases and compare the results to see whether it improves.
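As a minimal illustration of the baseline and stress-to-the-limit steps, the sketch below fires concurrent requests at increasing user counts and collects per-request response times; `fake_request` is a stand-in for a real call (e.g. an HTTP request) to the system under test:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real request to the system under test."""
    time.sleep(0.01)

def run_load(n_users, n_requests):
    """Fire n_requests from a pool of n_users virtual users
    and collect the per-request response times."""
    def timed_call(_):
        start = time.perf_counter()
        fake_request()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(timed_call, range(n_requests)))

# Baseline at typical load, then step the load up toward the limit.
for users in (1, 10, 50):
    times = run_load(users, 100)
    print(users, statistics.mean(times))
```

Real test tools (JMeter, Grinder, etc.) add think time, ramp-up, and reporting on top of this basic loop.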

monitoring (to know where the bottleneck is):
• application: eclipse tptp profiler
• jvm: java profiler
• database: oracle trace, query optimizer
• os: top, vmstat, perfmon
• networks: packet sniffers (e.g. tcpdump), netstat, network protocol analysers (e.g. Ethereal)

How:
1. define the metrics
2. simulate usage: run a test run, which executes test scripts (consisting of requests/queries that simulate the usage profile)
3. define the sampling method (e.g. a fixed number of cycles or a fixed time window)

Performance metric
• use clear/specific performance criteria, e.g. not "the web should be as fast as possible" but "the user gets a confirmation after clicking the submit button, with a max response time of 10 sec (given 100 simultaneous user activities)"
• for the GUI, the metric can be max response time given #users
• for the back-end (e.g. web services or a database), the metric can be #transactions per sec (TPS)
• you can also define over-limit behaviour, e.g. 10% response time degradation with 1000 users.

The statistics:
a. Average response time (ART): arithmetic mean over all users
caveat: if the ART is within the limit, it doesn't mean that all user instances are within the limit.
suggestion: plot the ART vs time (test cycles)
b. Average ART (AART): normalize the ART by the number of requests, so it is comparable between different test scripts.
c. define the throughput in transactions per sec (TPS): e.g. #(grouped) queries/sec in a database or #messages/sec in JMS
d. Quality: standard deviation / average, preferably below 0.25,
e.g. compute the standard deviation of the ART (as a function of cycles) and divide by the average ART over the cycles.
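The ART (a) and the quality ratio (d) can be computed like this; the sample values are hypothetical per-cycle ARTs in seconds:

```python
import statistics

def art(response_times):
    """Average response time: arithmetic mean over all samples."""
    return statistics.mean(response_times)

def quality(response_times):
    """Standard deviation divided by the mean; prefer values below 0.25."""
    return statistics.stdev(response_times) / statistics.mean(response_times)

samples = [1.0, 1.2, 0.9, 1.1, 1.0]  # hypothetical per-cycle ART in seconds
print(art(samples), quality(samples))
```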


Tools, examples:
test: SoapUI, httperf, JUnitPerf, Grinder, JMeter


Please write your comments about your experiences with performance testing.



Source: Steve's blog http://soa-java.blogspot.com

References:

My blog post about using performance tests in the continuous integration test:
http://soa-java.blogspot.com/2011/03/soa-continuous-integration-test.html

J2EE Performance Testing by Zadrozny et al.


Grig Gheorghiu's blog, a good place for information about testing, python and linux tips.
http://agiletesting.blogspot.com/2005/02/performance-vs-load-vs-stress-testing.html

Performance Engineering
http://performance.punebids.com/

Why planning based on features instead of activities


The conventional way to make a Gantt chart for planning is by typing the list of activities (derived from your statement of work/SOW) into your Work Breakdown Structure (WBS). However, in Scrum/agile, we are recommended to set up our plan based on the software features (e.g. user stories, requirements) instead of a list of activities.

The reasons: several problems may arise when you focus on activities instead of features during planning:
You focus on your development activities (which may have less value to the customer) instead of on customer value.

Your focus is on finishing activities instead of delivering the features which the users need. When the project is behind schedule, people tend to crash the schedule by reducing the work to do. If you focus on activities, you will cut the activities that are prioritized for the convenience of the development team. In this process you may drop features, and some of the dropped features may have greater value than those that are delivered.

When you check the completeness of the work, you will check the completeness of the activities instead of the requirements of the product (which are more relevant to the users.)

What are your experiences with this issue? Please write your comments.







Source: Steve's blog http://soa-java.blogspot.com

References:
Agile Estimating and Planning

Choosing open source / free Scrum tools

I would like to have a free Scrum tool which has these features for planning & progress reporting:
• virtual task board (to show which user stories in the current sprint / sprint plan, release plan)
• burndown chart
• backlog (prioritized list of user stories)
• velocity chart
• Gantt chart / iteration timeline

I found at least 2 tools that fulfill those criteria:
• Icescrum http://www.icescrum.org/
• Agilo http://www.agile42.com/cms/pages/agilo/
Agilo has better documentation than IceScrum and also offers integration with Trac & SVN, but you need to pay for a Pro license to use the virtual board (& many other handy functions), about 800 euros for 10 users/year. So in the end I decided to use IceScrum instead. IceScrum is just a web application (a war file) that you can install, for example, in Tomcat. You can test-drive IceScrum here: http://www.icescrum.org/demo/

Some screenshots from http://www.icescrum.org/:

Burndown chart & activities


Virtual task board: release plan /sprint plan


Which agile/scrum tools/project management tools do you use? What are the advantages/disadvantages of those tools? Please write your comments.







Source: Steve's blog http://soa-java.blogspot.com

References:

Comparing Open Source Agile Project Management Tools by Brad Swanson
http://olex.openlogic.com/wazi/2009/comparing-open-source-agile-project-management-tools/

Scrum and XP from the Trenches by Henrik Kniberg
www.crisp.se/henrik.kniberg/ScrumAndXpFromTheTrenches.pdf

Agile Estimating and Planning

Monday, September 12, 2011

Design document: to write or not to write


One of the tenets of the agile manifesto says "Working software over comprehensive documentation". Unfortunately many lazy (agile) developers use this manifesto as an excuse: "we don't have to write documentation at all". Some developers claim that they are agile when they follow the no-documentation principle without practicing other agile principles (such as early & frequent deliveries.)

I believe that writing simple documents (such as a use-case document, a requirements list and an architecture design) is still beneficial, even in the agile context.

There are several benefits of a written design document (whether as up-front design or during the iterations):
• you can distribute it to many people (even across continents) to get feedback, so you don't have to explain again & again orally.
Getting feedback in the early phase, while you are defining the architecture, is important. Even with an agile process, you can't deny that the cost of changing your initial architecture decisions gets more and more expensive as development proceeds. The cost of repairing a wrong decision made in the early (architecture) phase is high. You get better feedback by writing the requirements & the architecture down clearly in documents.
• if you don't write things down, you can easily overlook or forget important details (beware: the devil is in the details); other people can review the lists in your documents and add to or comment on them.
• You can postpone some risky decisions and delay the design which is most likely to change, but at least you need to mention the undecided issues, the risks and the assumptions in your design documents; they serve as a checklist that you/others can refer to later for discussion.
• If you leave the company/project or delegate the project to others, the next person doesn't have to start from scratch. It will also be easier for the next people who have to extend the finished product, since they understand the reasons behind your decisions.

Please read my blog about "Software architecture design document"
http://soa-java.blogspot.com/2011/06/software-architecture-design-document.html

How about your experiences with the trade-off between rigorous (RUP) documentation & agile simplicity in your organization? Please write your comments.





Source: Steve's blog http://soa-java.blogspot.com

Reference:
Agile Manifesto

How to define your team: fixed teams or project based?


We consider 2 ways to form developer/tester teams:
1. Use a fixed team that moves together from one project to the next.
2. Form a new team from a developer pool each time a new project comes.

In my opinion, the first approach (a fixed team) is preferable to the second (a project-based team).

A fixed team will be more productive since its members have learned to trust & anticipate each other (regarding competences, work styles, personalities). Besides, it's easier to measure the velocity (thus improving your estimation & planning) with a fixed team. On the other hand, you need to give the team time to "gel"; once the team spirit is formed, the productivity will increase.

A scrum team lends itself to this fixed-team approach. The daily scrum meetings facilitate face2face communication between team members and thus strengthen the "glue" inside the team, which is good for bashful geeks who would otherwise hide behind their PCs the whole day.

Strangely enough, my observation of the drawback of a project-based team came from outside professional work. I can share an experience from our church music team. As one of the biggest churches in the Netherlands, we have lots of talented musicians in the pool. Every week we draw a set of band members from the pool. We spend the first hours of the weekly rehearsal struggling to blend as a new team, so less time is left to build the musical quality or to worship as a team.

In conclusion, teams are harder to build than projects, so it's better to form a persistent team that moves together from one project to the next.

How about your experience in your organization (work, church/community project etc)? Please share your comments.



Source: Steve's blog http://soa-java.blogspot.com

Reference:
The Clean Coder by Robert Martin



Disclaimer: the 'bash-ful' term in this article has nothing to do with the bash shell in Linux.

Using openSSH

Ssh/scp is more secure than telnet/rsh/rcp due to encryption and server verification through host keys. In this blog we will discuss 3 issues: how to verify that you are connecting to the genuine server, how to create new keys in case your keys have been compromised, and a handy method to do ssh/scp without a password.

1. How to verify a server connection
The first time you try to connect to a server using ssh, you will be asked to verify the public key of the server:
> ssh auser@aserver
The authenticity of host 'auser(ip address)' can't be established.
RSA key fingerprint is bla:16:ee:ec:0b:19:5e:0b:33:c7:9f:ef:bla:bla:bla
Are you sure you want to continue connecting (yes/no)?

Once you say yes, the public key will be saved in your ~/.ssh/known_hosts file. But bear in mind the man-in-the-middle attack: how can you be sure that you are communicating with the genuine server? One way to check is by comparing the fingerprint of the server's public key with the fingerprint shown above. The server's public key is located in /etc/ssh (e.g. /etc/ssh/ssh_host_rsa_key.pub). Having this file (perhaps mailed to you by the admin of the server) you can generate the fingerprint using 'ssh-keygen -l -f public_key_file' and compare the value with the fingerprint above.

2. How to create new keys
If you're in a situation where your server keys have been compromised, you can generate a new pair of ssh public & private keys on the server using ssh-keygen or openssl, for example: 'ssh-keygen -t algorithmname', substituting algorithmname with rsa or dsa. Use the 'HostKey keyfile' directive in the sshd configuration to assign this key as the new ssh host key. It's a good habit to renew your keys regularly, just in case the current key has been compromised.

3. Ssh/scp without a password (public-key authentication)
It is handy to avoid being asked to type a password every time you use scp/ssh. Here are the steps to accomplish this: generate client keys using 'ssh-keygen -t algorithmname' in the ~/.ssh directory, substituting algorithmname with rsa or dsa. Then copy the public key to ~/.ssh/authorized_keys on the server.



Source: Steve's blog http://soa-java.blogspot.com

References:
Foundations of CentOS Linux by Chivas Sicam and Ryan Baclit


Man in the middle attack http://en.wikipedia.org/wiki/Man-in-the-middle_attack
Convert keys between OpenSsh and OpenSSL http://www.sysmic.org/dotclear/index.php?post/2010/03/24/Convert-keys-betweens-GnuPG%2C-OpenSsh-and-OpenSSL

Wednesday, August 17, 2011

Tracer bullet software development


The principle behind the name:
by tracing your bullets, you can see where they're going, so you can adjust your aim toward the target.

The steps:
• define the high-level subsystem objects (e.g. UI client, database access layer), by all the developers in the team instead of just an architect
• the developers define the interfaces of these objects & how they communicate (e.g. via web services)
• implement the interfaces with mock objects and integrate early (a proof of concept for how the subsystems communicate)
• implement tests with user scenarios & canned data
• implement the functional code (start with the hardest problems / new technology first); only accept working code (which doesn't break the tests)
• refactor & refine

SOA & Spring lend themselves to this method. In WSDL-first SOA development you inherently start by defining the interfaces (via WSDL contracts). In Spring you can start with defining the interfaces and later bring in the implementations using dependency injection.
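The interface-first idea can be sketched as follows (in Python rather than Java, for brevity; all the names here are illustrative):

```python
from abc import ABC, abstractmethod

# Define the subsystem interface first (analogous to a WSDL contract
# or a Spring interface).
class CustomerRepository(ABC):
    @abstractmethod
    def find_name(self, customer_id: int) -> str: ...

# First tracer bullet: a mock with canned data, so the callers and the
# integration path can be exercised before the real database layer exists.
class MockCustomerRepository(CustomerRepository):
    def find_name(self, customer_id: int) -> str:
        return "Jip"  # canned data

def greeting(repo: CustomerRepository, customer_id: int) -> str:
    """A consumer that depends only on the interface, not the implementation."""
    return "Hello, " + repo.find_name(customer_id)

print(greeting(MockCustomerRepository(), 42))
```

Later the mock is replaced by the real implementation without touching the consumers, which is exactly what dependency injection makes easy.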

The benefits:
• teams/developers can work in parallel
• the whole teams/developers understand the architecture
• promote communications between teams/developers
• doesn't waste time on unproven low-level designs
• you can give demos to the customer earlier, to get earlier feedback
• the application management team (technical application management) can test the integration between subsystems earlier, reducing the risk that the project will be late in the product acceptance/deployment phase or that the developers need to redesign and reimplement parts of the system.
• the QA team can test the performance & security earlier, for earlier feedback

Source: Steve's blogs http://soa-java.blogspot.com/

Any comments are welcome :)




References:
The Pragmatic Programmer


Ship It!

Tuesday, June 21, 2011

Software (architecture) design document (technical design)


Source: Steve's blogs http://soa-java.blogspot.com/

Why write design documents:
• to communicate design decisions and why
• to document the benefits & risks of your design
• to serve as a written contract (between you, your manager, your team, the product manager, the client): to limit changes, amount of work, and risks.
• to provide common terminologies
• to facilitate peer review / feedback from stakeholders, to minimize unexpected risks by addressing them before implementing the code




An exhaustive table of contents:

1. doc purpose, terminology, reference, distribution list, version,

2. high level summary: purpose/problem/why, who will use, gap analysis (condition now vs advice/solution)

3. scope: benefits, assumption, risks/issues, relation with other projects/dependency, standards
4. requirements:
o use cases:
 actors
 trigger
 preconditions
 postconditions
 priority
 level: user-goal/sub-function/summary
 frequency
 type: interactive/batch/interface
 flow (basic flow, alternative flow, error handling flow): what the user does & system response, not how/why
 data dictionary (e.g. what the user enters in the UI):
o field name
o type (input,output)
o required y/n
o format: Numeric, texts, y/n, enums
o validation
 special requests (e.g. browsers & resolutions, availability)
 storyboard: to show user expectation of system behaviour e.g. GUI information presented to the user/entered by users, actions/requests which users can perform, screenshots
o other (non-functional) requirements (e.g. performance, legal, licence, security, tools, standards, compatibility with legacy system, 3rd party, OS/environment)

5. high level design / system architecture (the 4+1 views and more):
o logical view: list of main elements: roles/responsibilities/interactions, architecture diagram, organization (subsystems, layers), frameworks
o design constraints: application type (e.g. web apps, webservice), architecture style (e.g. layered, domain driven, soa), technologies (e.g. languages/framework, database vendor, OS) , compatibility, dependency, corporate policies, standards
o design trade offs/rationale / use-case view / traceability, for example:





o implementation view: artifacts/executables, module/package structure
o process view (concurrency, synchronization)
o (human) business process: (can refer to use-case)
 forms: sample forms, handling
 procedures: trigger/conditions, handling-steps/order/process-diagram, data needed, business rules, expected results, time limit, error handling
o quality attributes:
 security concerns: authentications/authorization mechanism, encryption, password (min strength, expiration), database/file-system access level (read only, write)
 performance (e.g. response time, load/throughput): goal (e.g. response 5 sec with 100 sessions), degradation mode (e.g. response 5-10sec with 150 sessions), measure, correction action (e.g. if timeout then show a "please try again later" page)
 reliability: transactions/locking, validation, defect rate (e.g. the product is accepted when no critical bugs are left in the bug list), accuracy, recovery, restart, MTBF (mean time between failures), MTTR (mean time to repair), error handling, logs, troubleshooting / error codes
 usability/user friendliness: resolution, browser, font/color, standardization of GUI components & terms, help/user-manual, max time to complete task, training requirement)
 maintainability/scalability: log, doc, standard, parameterization (e.g. internationalization, changing contents/config)
 availability
 reusability
o crosscutting concerns (e.g. cache, authentication, communication, exception management, log) and how to address this issue (e.g. aspect oriented programming)
o test: risk-level & tests per design aspects/requirements/usecase ,techniques/framework (e.g. selenium GUI test, jmeter stress test)
o deployment view: hardware/networks/software configuration (e.g. database, mds, firewalls, clusters, soa/clouds configuration), compatibility, protocols (e.g. https, soap), deployment-settings, configuration management (e.g. via console/file/centralised-server), installation procedure

6. low level design (e.g. for GUI/presentation layer, business classes, webservices, database layer):
for each subsystems describes:
o role/function
o artifacts (e.g. jar/war/dll), how to be used (e.g. webservice, lib, web application)
o input/output, interface (e.g. webservice or lib)
o constraints: dependency, framework
o class diagram
o sequence diagram, process view (concurrency, synchronization)
o business process diagram
o error handling
o configuration (e.g. hardware/software/version needed, wsdl)
o for UI components: screenshots, screen objects, actions, events, files/classes (templates/jsp/php), resolutions & browsers
o for data objects: how to store/datasource (e.g. database/file), process, data (class) dictionary (type, description, attributes, methods), table dictionary (data types, keys/constraints), data model / table relationship diagram, accessibility/security
o unit tests

7. planning: development schedule/time, development cost, development organisation (e.g. resources/skills needed/roles, reporting/meetings), procurement/cost (hardware/software/tools/licences/workplace/training)




Tips:
• some items in the content list above are optional and can be removed due to duplications (with similar items within this document or with other documents such as statement of work/SOW, product requirement doc/PRD, plan)
• you can separate the contents into several documents for several target audiences (e.g. use case doc, PRD, software architecture doc, product acceptance plan, test plan, development plan). Personally, for small projects (less than 100 thousand euros / 400 man-hours) I prefer to write a single document, as concise as possible (less than 20 pages), instead of writing 6 separate documents.
• you don't have to be 100% UML compliant; the diagrams are just supplements
• don't try to be perfect in the first iteration; the design document is a living document, so keep it lightweight and easy to update
• provide a unique identifier for each design element (e.g. use cases, requirements) so you can refer to them later
• Suppose as an architect you need to write a design document for your developers: how much detail should you put in it? The amount of design work and the formality of that work depend on two factors: the expertise of the project's developers and the difficulty of the project. If the project has expert programmers and is simple, the design can be less detailed. But if the project has inexperienced programmers, uses unfamiliar or untested technology, or demands high reliability, then a more detailed design approach may be warranted.

Any comments are welcome :)




References:
• Software Project Survival Guide by McConnell

• Microsoft Application Architecture Guide

• Applying UML and Patterns by Larman

• http://blog.slickedit.com/2007/05/how-to-write-an-effective-design-document/
• How to Write a Software Design Document by Alissa Crowe-Scott http://www.ehow.com/how_6734245_write-software-design-document.html
• RUP op maat by Dekker