Friday, September 12, 2008

Java Variables

The Java programming language defines the following kinds of variables:

  • Instance Variables (Non-Static Fields) Technically speaking, objects store their individual states in "non-static fields", that is, fields declared without the static keyword. Non-static fields are also known as instance variables because their values are unique to each instance of a class (to each object, in other words); the currentSpeed of one bicycle is independent from the currentSpeed of another.

  • Class Variables (Static Fields) A class variable is any field declared with the static modifier; this tells the compiler that there is exactly one copy of this variable in existence, regardless of how many times the class has been instantiated. A field defining the number of gears for a particular kind of bicycle could be marked as static since conceptually the same number of gears will apply to all instances. The code static int numGears = 6; would create such a static field. Additionally, the keyword final could be added to indicate that the number of gears will never change.

  • Local Variables Similar to how an object stores its state in fields, a method will often store its temporary state in local variables. The syntax for declaring a local variable is similar to declaring a field (for example, int count = 0;). There is no special keyword designating a variable as local; that determination comes entirely from the location in which the variable is declared — which is between the opening and closing braces of a method. As such, local variables are only visible to the methods in which they are declared; they are not accessible from the rest of the class.

  • Parameters You've already seen examples of parameters, both in the Bicycle class and in the main method of the "Hello World!" application. Recall that the signature for the main method is public static void main(String[] args). Here, the args variable is the parameter to this method. The important thing to remember is that parameters are always classified as "variables", not "fields". This applies to other parameter-accepting constructs as well (such as constructors and exception handlers).
Having said that, the remainder of this tutorial uses the following general guidelines when discussing fields and variables. If we are talking about "fields in general" (excluding local variables and parameters), we may simply say "fields". If the discussion applies to "all of the above", we may simply say "variables". If the context calls for a distinction, we will use specific terms (static field, local variable, etc.) as appropriate. You may also occasionally see the term "member" used. A type's fields, methods, and nested types are collectively called its members.
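To make the four kinds concrete, here is a minimal sketch built around the Bicycle example mentioned above (the exact field and method names are just an illustration):

class Bicycle {
    static final int numGears = 6;   // class variable (static field): one copy shared by all bicycles
    int currentSpeed = 0;            // instance variable (non-static field): one per bicycle

    void speedUp(int increment) {    // increment is a parameter
        int newSpeed = currentSpeed + increment;   // newSpeed is a local variable
        currentSpeed = newSpeed;
    }
}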

Thursday, April 10, 2008

Testing Web Services

Solid testing techniques are essential for developing robust Web services because Web services' flexibility and connectivity provide an increased opportunity for errors. Problems can be introduced in any of a service's multiple layers, and even the slightest mistake can cause the entire service to fail.


In order for a complete Web service to deliver the promised functionality, both the client and the service must satisfy a number of requirements. Interfaces must be correctly described in a WSDL document. Messages must conform to both the transport protocol specification (such as HTTP 1.1) and the message protocol (such as SOAP 1.1). Messages must also conform to the contract specified in the WSDL describing the service, both in terms of the message content and the binding to the transport layer. Add to the mix security provisions, interoperability issues, UDDI registration requirements, and performance requirements under load, and it is easy to see why Web service testing is not a trivial matter.


This blog explains general best practices that developers of Web service servers and/or clients can apply to ensure service functionality, interoperability, and security. For developers of Web services (producers), it explains potential problems and describes techniques for exposing those problems. For developers of Web service clients (consumers), it describes techniques for verifying that the client correctly connects to the server, sends a correct message, and gracefully handles fault conditions. In addition, it discusses interoperability, security, and UDDI registry issues that affect both Web service producers and consumers. The bulk of the discussion assumes the use of WSDL for describing the service, HTTP for the transport layer, and SOAP for the messaging layer.



Server Testing

There are three main categories of Web service testing:
Functional testing: Verifies that the service functions correctly
Regression testing: Detects whether changes to the service have broken functionality that previously worked
Load testing: Verifies whether the service meets performance and functional requirements under load


We will explore each type of testing in the sections that follow.
Functional Testing


Functional testing is typically the first step in testing a Web service server. (If the server does not work correctly, its performance, security, interoperability, etc., are essentially irrelevant.) The goal of this testing is fairly straightforward: to ensure that the server delivers appropriate responses for the given requests. However, due to the complexity of Web services, this task is far from simple. With most Web services, it is impossible to anticipate exactly what types of requests clients will send. Enumerating all possible requests is not feasible because the space of possible inputs is either unbounded or intractably large. As a result, it is important to verify whether the server can handle a wide range of request types and parameters.


There are two main steps to each functional test:




1. A test client sends a request to the server over an HTTP connection. This involves determining what types and ranges of requests need to be tested to determine whether the server will react appropriately to the wide variety of requests it might receive. Once you have determined what requests to send, tools such as those available in WebSphere Studio Application Developer (WSAD) can facilitate the creation and execution of test clients.




2. The response is analyzed for correctness (either by inspection or by running the response through a tool or script that verifies conformance to a specification). This analysis can be as simple as performing a text comparison with the expected response or as complex as extracting specific information from an XML document and performing application-specific checks.

The simplest possible functional test involves sending a request and checking whether the server returns a response or an error message. For example, assume we have a sample employee Web service that allows queries by last name and returns the results in the form of an XML document. The most basic functional test would involve sending a valid input parameter (a last name entered into the system) and checking whether a response or an error message was returned.
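As a rough illustration of these two steps, a bare-bones test client can be written with nothing more than HttpURLConnection: it performs step 1 (send the request) and prints the raw response so it can be analyzed in step 2. The endpoint URL, the operation name, and the SOAP body here are placeholders, not part of any real service:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleSoapTestClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and SOAP body; substitute your own service's values.
        URL endpoint = new URL("http://localhost:8080/employeeService");
        String soapRequest =
            "<?xml version=\"1.0\"?>"
          + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body><getEmployeesByLastName><lastName>Fett</lastName>"
          + "</getEmployeesByLastName></soap:Body></soap:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setDoOutput(true);

        OutputStream out = conn.getOutputStream();
        out.write(soapRequest.getBytes("UTF-8"));
        out.close();

        // A non-200 status or an exception here is itself a test result worth recording.
        System.out.println("HTTP status: " + conn.getResponseCode());
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}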


Although these types of simple tests provide an adequate way to begin testing, they cannot verify the service's more complex functionality requirements. Fully testing even this simple service's functionality requires checking for all of the following failure modes.




1. The attempt to open a socket to the URL of the Web service fails. This indicates a network problem or an incorrect URL or IP address.


2. The Web service returns a SOAP fault. This indicates an error caused by the server or by the client, depending on the type of fault (for example, a Server fault code versus a Client fault code in SOAP 1.1).


3. The Web service responds and does not return a fault, but the responding message is not readable by the client because of an interoperability issue. For example, either the server or the client (or both) might not resolve XML namespaces in accordance with the standard.


4. A response is received, but not in the expected format. For example, the response is in an incorrect XML format or some other arbitrary text format. This type of error can be detected by validating XML with respect to an XML Schema.


5. A response is received in the format expected, but the data contained is incorrect (for example, when we request records for Fett but receive records for Kenobi).


The symptoms of the first three problems are independent of any particular Web service because they fail at the levels of the HTTP and SOAP protocols, which are consistent across Web services. The fourth and fifth problems can also arise in any Web service, but their details, and therefore their detection, are necessarily application specific.




For example, a different Web service might accept a ticker symbol and return a stock quote. In this scenario, an expected response (SOAP envelope omitted for clarity) would carry the quote value, 16.87, inside the response element that the service's WSDL declares; receiving the bare number, an HTML error page, or some other arbitrary text instead is an example of receiving a response in an unexpected format. The details of checking for this type of error depend on the specific service because different services can have different response types.


Each of the potential failure types exhibits different symptoms when encountered in the test client. The first three problems typically result in exceptions; ideally, the client will catch these exceptions, record them as test failures, and continue testing. The format can be verified by parsing the response with a validating XML parser, or in any other way that your testing infrastructure allows. Detecting incorrect results in the correct format is the most application-specific test. It generally requires using tools that allow you to make sophisticated assertions about the service's responses, or writing code that parses the XML and tests for constraints. In the case of the incorrect last name from the employee service, the test needs to verify that the lastName attribute of each Employee element matches the last name specified for that particular query. The best way to implement this verification depends on your test client and your verification capabilities. Although WSAD does not currently provide this level of verification, its functionality can be extended with third-party tools such as Parasoft SOAPtest. Another approach is to write an XSL file that outputs an error message when applied to nonconforming outputs.
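As one sketch of the application-specific checks described above, the following code parses the employee response with the JDK's DOM parser and verifies that every Employee element carries the expected lastName attribute. The element and attribute names are assumptions about the service's schema, not a real contract:

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.File;

public class EmployeeResponseCheck {
    // Returns true if the response contains Employee elements and every one has the queried last name.
    static boolean lastNamesMatch(File responseFile, String expectedLastName) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(responseFile);

        NodeList employees = doc.getElementsByTagName("Employee");   // assumed element name
        for (int i = 0; i < employees.getLength(); i++) {
            Element employee = (Element) employees.item(i);
            if (!expectedLastName.equals(employee.getAttribute("lastName"))) {   // assumed attribute name
                return false;
            }
        }
        return employees.getLength() > 0;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lastNamesMatch(new File("response.xml"), "Fett"));
    }
}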



While you are performing functional testing, remember that a Web service has multiple layers and that errors can be introduced at each layer. There are transport-level errors, such as an incorrect content length specified in an HTTP header; message-level errors, such as an invalid SOAP envelope; and application-level errors, such as a getStockPrice operation returning the price for the wrong ticker symbol. Keeping the layers in mind is helpful for generating adequate test coverage, as well as for debugging failures.



Also, be aware that the WSDL can be another source of errors. If the WSDL permits a wider class of inputs than the application actually handles, the service is more exposed to erroneous input at the application level. Ideally, a service's robustness would be tested by using the type definitions from the WSDL to generate all possible inputs and send each combination to the server. In practice, this is not feasible because the input space is usually much too large. A more pragmatic goal is to cover a representative portion of the input space.



After you confirm that the server handles expected requests correctly, perform fault checking to see how it handles unexpected input. The system will inevitably be faced with unexpected requests either as a result of mistakes (such as a bad WSDL) or from attempts to breach service security. (Hackers sometimes trick applications into behaving unexpectedly by sending invalid inputs.) Performing this fault checking involves sending the service requests with illegal and/or unexpected parameters, then verifying the response with assertions, custom code, or other tool-specific verification methods. The expected service behavior in these situations can depend on the stage of development as well as whether the Web service is intended for public versus internal use. For an internal service, it might make sense for the service to display its stack trace when a runtime error occurs, because the stack trace offers very valuable information for debugging. For a publicly exposed Web service, displaying the stack trace is arguably undesirable because it provides additional information about your implementation details (details that you would prefer hackers not know).


WSAD offers considerable flexibility for producing Web service clients. A standard client (shown in Figure 1) can be generated at the same time the Web services themselves are generated and deployed.
If needed, this standard client can be customized graphically or programmatically in the built-in JSP editor (shown in Figure 2). The combination of these client-generation and customization options provides the opportunity to perform a broad range of functional testing.


Regression Testing


After you have verified the server's functionality, rerun the functional test suite on a regular basis to ensure that modifications do not cause unexpected changes or failures. A common technique is to send various requests, manually confirm the responses, and then save them as a regression control. These regression tests can be incorporated into a regular automated build process. When regression tests are run frequently, regressions are easy to fix because they can be directly attributed to the few changes made since the last time the test was run. WSAD does not currently provide an explicit regression testing feature, but this capability can be added by extending WSAD with additional tools.
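A minimal sketch of that technique, assuming each response has already been saved to a control file during a manually verified run (the file names are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;

public class RegressionCheck {
    // Reads a whole file into a String (adequate for small control files).
    static String readFile(String path) throws Exception {
        StringBuilder sb = new StringBuilder();
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
        reader.close();
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // "control.xml" was verified by hand once; "current.xml" is the response from today's build.
        String control = readFile("control.xml");
        String current = readFile("current.xml");
        if (control.equals(current)) {
            System.out.println("PASS: response matches the regression control");
        } else {
            System.out.println("FAIL: response differs from the regression control");
        }
    }
}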


Load Testing


The next step in the server testing process is load testing. The goal of load testing is to verify the performance and functionality of the service under heavy load.
The best way to start load testing is to have multiple test clients run the complete functional test, including request submissions and response verifications. When load testing ignores the functionality verification process and focuses solely on load-rate metrics, it risks overlooking critical flaws (such as functionality problems that surface only under certain loads).


To thoroughly test the service's performance, run the functional test suite under a variety of different scenarios to check how the server handles different types of loads. For example, the test could check functionality and response time under different degrees of load increases (sudden surges versus gradual ramp-ups) or different combinations of valid and invalid requests. If the load tests reveal unacceptable performance or functionality under load, the next step is to diagnose and repair the source of the bottleneck. Sometimes, the problem is caused by a fundamental algorithmic problem in the application, and the repair could require something as painful as an application redesign and rewrite. Other times, it is caused by some part of the infrastructure (the Web server, the SOAP library, the database, and so forth). In these cases, fixing the problem might be as simple as changing a configuration or as complex as changing the architecture. Because fixing performance problems sometimes demands significant application or system changes, it is best to start load testing as soon as possible. By starting early, you can diagnose and fix any fundamental problems before it is too late to do so without a major rewriting or rebuilding nightmare.


WSAD does not currently provide load testing functionality. It allows you to generate a large load by writing custom client code that uses a loop, but this is not the preferred approach. Load testers provide more control over how the load is generated because they allow you to control parameters such as test duration and load size.
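For what it's worth, even the custom-code approach can be made a little more controlled than a bare loop. The sketch below spins up a fixed number of threads that repeatedly call a hypothetical runFunctionalTest() for a configurable duration; it is a rough stand-in for a real load tester, not a replacement for one:

public class SimpleLoadDriver {
    // Placeholder for the single-client functional test (request submission plus response verification).
    static void runFunctionalTest() {
        // ... send a request and verify the response here ...
    }

    public static void main(String[] args) throws Exception {
        final int clients = 20;                  // simulated concurrent clients
        final long durationMillis = 60 * 1000L;  // how long to sustain the load
        final long stopAt = System.currentTimeMillis() + durationMillis;

        Thread[] threads = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    while (System.currentTimeMillis() < stopAt) {
                        runFunctionalTest();
                    }
                }
            });
            threads[i].start();
        }
        for (int i = 0; i < clients; i++) {
            threads[i].join();
        }
        System.out.println("Load run finished");
    }
}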


Client Testing


SOAP client developers are responsible for ensuring that the client sends requests properly. If a client sends invalid or improperly formed requests, the server usually cannot deliver the expected results. The process of testing clients is a little different from testing services because clients are the initiators of Web service interactions. This means that from a testing standpoint, there are two main things to verify: whether the client can correctly initiate an interaction by sending a request, and whether the client behaves correctly when it receives a response. Note that the second part requires inspection of the client application; it cannot generally be determined by merely observing wire traffic.
The best way to test a particular client depends on the nature of the application. If the client accesses a server that can accept "test" requests with no harmful side effects, it can directly access the live server during testing. If the server is not yet available or should not be sent test inputs, the client can access an emulated server or server stubs during testing.


No matter what type of server a client accesses, the same general principle applies: the client sends a request, the server responds, then client success or failure is determined by recording and verifying the request and/or by verifying the server response. (The same techniques and tools used to verify server functionality can be used for this purpose.) Of course, server bugs could mislead you: if the server is not operating correctly, correct client requests might result in incorrect responses, and incorrect requests might result in apparently correct responses. You can ensure that server functionality problems are not confusing your results by (1) verifying the request as well as the response, and (2) testing the simplest possible server implementations (server stubs) instead of - or in addition to - testing actual, complex servers.
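One way to build such a server stub without any application server at all is the lightweight HTTP server bundled with recent JDKs (com.sun.net.httpserver, available since Java 6; it may be absent on older or non-Sun JVMs). The canned response below is a placeholder, not a real service's message:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SoapServerStub {
    public static void main(String[] args) throws Exception {
        // Canned response; substitute whatever your client actually expects to receive.
        final byte[] cannedResponse =
            ("<?xml version=\"1.0\"?>"
           + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
           + "<soap:Body><ok/></soap:Body></soap:Envelope>").getBytes("UTF-8");

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/employeeService", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                // A real stub would also record the incoming request so the test can verify it later.
                exchange.getResponseHeaders().set("Content-Type", "text/xml; charset=utf-8");
                exchange.sendResponseHeaders(200, cannedResponse.length);
                OutputStream body = exchange.getResponseBody();
                body.write(cannedResponse);
                body.close();
            }
        });
        server.start();
        System.out.println("Stub listening on http://localhost:8080/employeeService");
    }
}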


After you verify that the client sends acceptable requests and can receive responses, shift to testing exceptional cases. For example, test that the client behaves properly when the server goes offline by pointing the client at an invalid URL. Or use server stubs to simulate the server sending the client invalid data.


Although WSAD does not currently offer a direct client testing feature, it is possible to test a particular client by writing a test service that performs the desired analysis on the client request.




Other Testing Considerations
Functional testing and load testing are the most fundamental types of testing for Web services. Depending on the type of service being tested and its requirements, it might be necessary to address additional issues during the testing process. Some issues that might further complicate many developers' testing are interoperability, security, and UDDI registry use.


Interoperability


A driving force behind Web services is the promise of seamless interoperability for disparate programming languages, operating systems, and various runtime environments. Unfortunately, the mere adoption of technologies that promote this idea (XML, SOAP, WSDL, UDDI) does not make the promise a reality.


Ideally, interoperability would be verified by checking that a service adheres to a comprehensive, universally implemented set of standards. However, the existing W3C recommendations are still evolving. Furthermore, the technologies are flexible enough to provide implementers with a myriad of options (document style versus RPC style, SOAP encoding versus literal encoding, different array representations, different versions of HTTP, SOAP, WSDL, UDDI, and so on). Flexibility is generally beneficial, but if everyone chooses a different way to do things, it does not serve the goal of interoperability. As options proliferate, it becomes increasingly unlikely that any given vendor solution will completely conform to all aspects or options allowed by the standard.


Given the reality that not all of the standards today are fully developed or consistently implemented, one of the most pragmatic approaches to interoperability is the one taken by the Web Services Interoperability Organization (WS-I). WS-I, though not itself a standards body, intends to serve as a standards integrator by developing a core collection of profiles that are a subset of the various Web service technologies. By restricting development to technologies specified in WS-I profiles, developers can increase the odds that their systems will interoperate with other systems. Development tool companies are already working with the WS-I to develop tools that automatically check compliance with these profiles and, in the event of noncompliance, pinpoint exactly what needs to be changed to ensure compliance. Expect to see tools that check compliance with these profiles soon after the profiles are officially released.


Security


Web services security is not a single problem, but rather a host of interrelated issues. For any given application, some of the issues will be critical, while others may be of lower priority or even irrelevant. Some facets of security worth considering when deploying Web services are:
Privacy: For many services it is important that messages are not visible to anyone except the two parties involved. This means traffic will need to be encrypted so that machines in the middle cannot read the messages.
Message Integrity: Provides assurance that the message received has not been tampered with during transit.
Authentication: Provides assurance that the message actually originated at the source from which it claims to have originated. You may need to not only authenticate a message, but also prove the message origin to others.
Authorization: Clients should only be allowed to access services they are authorized to access. Authorization requires authentication because without authentication, hostile parties can masquerade as users with the desired access.


Security impacts testing requirements in two important ways. First, any security requirements for a Web service naturally translate into testing requirements. If a service requires a certain level of privacy, or if it requires that messages be authenticated in a certain way, then specific tests are needed to ensure that these security requirements are met. The second way that security impacts testing is subtler. To some extent, security schemes will complicate the process of testing and debugging the basic functionality. For example, nonintrusive monitors can often aid in functional testing as well as load testing. Encrypted traffic presents an obvious complication to this approach to testing.


UDDI


Thus far, we have not addressed how publishing and discovering services fits into testing. UDDI is not yet as mature a technology as some of the others discussed in this article, but it is evolving and gaining acceptance. Services registered in a UDDI registry that are discovered and bound dynamically have all the testing requirements that we have already discussed, plus the find and bind features require additional testing. It is helpful to consider UDDI testing in two pieces: the registry implementation and the entries within the registry. Most users will not be implementing their own UDDI registry, so the primary focus will be on testing the content of the registry. The most reliable way to test the content is to write test clients that perform inquiries on the registry, and then use the registry data to actually invoke the service. Functional testing of services can then be extended to include dynamic binding to endpoints specified in a UDDI registry. This ties together the functional testing of the registration with the service implementation. The current version of WSAD provides full UDDI support, which allows both querying and registration.


Conclusion


By integrating the discussed testing practices into the Web service development process, you can ensure that a Web service server works well with the possible types and volumes of client requests, and that a Web service client correctly accesses and retrieves whatever data a service has to offer. You can start implementing the discussed practices at any point in the development process, but if you start testing early, you will maximize your ability to prevent errors as well as detect errors. Typically, the earlier you detect a problem, the easier it is to fix it, and the less chance you and your team members have to inadvertently worsen the problem by building code or components that interact with the problematic element, or by reusing the problematic element for other servers or clients. If you start your testing as early as possible, then continue using the related tests as a regression test suite throughout development, you will not only ensure the client's or server's continued reliability, but also streamline the development process.

Saturday, March 15, 2008

Servlet Instantiation

Step 1:
The container reads the deployment descriptor and reads the value of the servlet-class element,
for example:


<servlet>
<servlet-name>MyServlet</servlet-name>
<servlet-class>com.mycompany.MyServlet</servlet-class> <!-- This value will be read. -->
<init-param>
<param-name>myVariable</param-name>
<param-value>1</param-value>
</init-param>
</servlet>

Step 2:
The container loads the class
Class clazz = Class.forName("com.mycompany.MyServlet");

Step 3:
The container creates an instance of the servlet class:
Servlet objServlet = (Servlet) clazz.newInstance();

The above step invokes the default constructor (the zero-parameter constructor) of the servlet class. The spec dictates that a servlet should have a constructor with zero parameters, i.e. a default constructor.

Step 4:
The container then reads the init-param XML tags, constructs a ServletConfig object from them, and invokes
objServlet.init(config);
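To see how that init-param reaches your code, here is a minimal sketch of what MyServlet itself might look like; how myVariable is actually used is made up for illustration:

package com.mycompany;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class MyServlet extends HttpServlet {

    private int myVariable;

    // The container requires a public no-argument constructor (the implicit default one is enough).
    public MyServlet() {
    }

    // Called by the container after instantiation, with the ServletConfig built from the descriptor.
    public void init(ServletConfig config) throws ServletException {
        super.init(config);   // always call super so getServletConfig() works later
        myVariable = Integer.parseInt(config.getInitParameter("myVariable"));
    }
}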


In simple words, the servlet container instantiates your servlet. Since the container does not know what parameters to pass, it calls the default constructor, i.e. the one with no arguments.

A servlet is still just a class, so of course you can instantiate it yourself. You can even have constructors taking arguments,

like: MyServlet servlet = new MyServlet();


But for the container to be able to use your servlet, it needs a public no-argument constructor. It would also not really make sense to instantiate the servlet yourself, because the container is responsible for the life cycle.

Saturday, March 8, 2008

The Exception Controversy

Definition: An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.

When an error occurs within a method, the method creates an object and hands it off to the runtime system. The object, called an exception object, contains information about the error, including its type and the state of the program when the error occurred. Creating an exception object and handing it to the runtime system is called throwing an exception.
After a method throws an exception, the runtime system attempts to find something to handle it. The set of possible "somethings" to handle the exception is the ordered list of methods that had been called to get to the method where the error occurred. The list of methods is known as the call stack.
The runtime system searches the call stack for a method that contains a block of code that can handle the exception. This block of code is called an exception handler. The search begins with the method in which the error occurred and proceeds through the call stack in the reverse order in which the methods were called. When an appropriate handler is found, the runtime system passes the exception to the handler. An exception handler is considered appropriate if the type of the exception object thrown matches the type that can be handled by the handler.
The Three Kinds of Exceptions
The first kind of exception is the checked exception. These are exceptional conditions that a well-written application should anticipate and recover from. For example, suppose an application prompts a user for a file name and passes it to the constructor of java.io.FileReader. If the user supplies the name of a nonexistent file, the constructor throws java.io.FileNotFoundException. A well-written program will catch this exception and notify the user of the mistake, possibly prompting for a corrected file name.
Checked exceptions are subject to the Catch or Specify Requirement.
The second kind of exception is the error. These are exceptional conditions that are external to the application, and that the application usually cannot anticipate or recover from.
For example, suppose that an application successfully opens a file for input, but is unable to read the file because of a hardware or system malfunction. The unsuccessful read will throw java.io.IOError. An application might choose to catch this exception, in order to notify the user of the problem — but it also might make sense for the program to print a stack trace and exit.
Errors are not subject to the Catch or Specify Requirement. Errors are those exceptions indicated by Error and its subclasses.
The third kind of exception is the runtime exception. These are exceptional conditions that are internal to the application, and that the application usually cannot anticipate or recover from. These usually indicate programming bugs, such as logic errors or improper use of an API. For example, consider the application described previously that passes a file name to the constructor for FileReader. If a logic error causes a null to be passed to the constructor, the constructor will throw NullPointerException. The application can catch this exception, but it probably makes more sense to eliminate the bug that caused the exception to occur.
Runtime exceptions are not subject to the Catch or Specify Requirement. Runtime exceptions are those indicated by RuntimeException and its subclasses.
Errors and runtime exceptions are collectively known as unchecked exceptions.
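As a small sketch of the Catch or Specify Requirement in action (the file names are just examples):

import java.io.FileNotFoundException;
import java.io.FileReader;

public class CatchOrSpecifyDemo {

    // Checked exception: either catch it...
    static FileReader openOrNull(String fileName) {
        try {
            return new FileReader(fileName);
        } catch (FileNotFoundException e) {
            System.out.println("No such file: " + fileName + ", please re-enter the name");
            return null;
        }
    }

    // ...or specify it in the throws clause and let the caller deal with it.
    static FileReader open(String fileName) throws FileNotFoundException {
        return new FileReader(fileName);
    }

    public static void main(String[] args) {
        openOrNull("employees.txt");

        // Unchecked exception: the compiler does not force us to catch or declare this,
        // because it indicates a programming bug (a null reference) rather than a
        // condition the caller can reasonably recover from.
        String fileName = null;
        fileName.length();   // throws NullPointerException at runtime
    }
}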

Advantage 1: Separating Error-Handling Code from "Regular" Code
Advantage 2: Propagating Errors Up the Call Stack
Advantage 3: Grouping and Differentiating Error Types

Now..The Exception Controversy

Because the Java programming language does not require methods to catch or to specify unchecked exceptions (RuntimeException, Error, and their subclasses), programmers may be tempted to write code that throws only unchecked exceptions or to make all their exception subclasses inherit from RuntimeException. Both of these shortcuts allow programmers to write code without bothering with compiler errors and without bothering to specify or to catch any exceptions. Although this may seem convenient to the programmer, it sidesteps the intent of the catch or specify requirement and can cause problems for others using your classes.
Why did the designers decide to force a method to specify all uncaught checked exceptions that can be thrown within its scope? Any Exception that can be thrown by a method is part of the method's public programming interface. Those who call a method must know about the exceptions that a method can throw so that they can decide what to do about them. These exceptions are as much a part of that method's programming interface as its parameters and return value.
The next question might be: "If it's so good to document a method's API, including the exceptions it can throw, why not specify runtime exceptions too?" Runtime exceptions represent problems that are the result of a programming problem, and as such, the API client code cannot reasonably be expected to recover from them or to handle them in any way. Such problems include arithmetic exceptions, such as dividing by zero; pointer exceptions, such as trying to access an object through a null reference; and indexing exceptions, such as attempting to access an array element through an index that is too large or too small.

Runtime exceptions can occur anywhere in a program, and in a typical one they can be very numerous. Having to add runtime exceptions in every method declaration would reduce a program's clarity. Thus, the compiler does not require that you catch or specify runtime exceptions (although you can).
One case where it is common practice to throw a RuntimeException is when the user calls a method incorrectly. For example, a method can check if one of its arguments is incorrectly null. If an argument is null, the method might throw a NullPointerException, which is an unchecked exception.
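A tiny sketch of that practice (the Account class and its argument are hypothetical):

public class Account {
    private final String owner;

    public Account(String owner) {
        // Caller error, so an unchecked exception is thrown; callers are not forced to catch it.
        if (owner == null) {
            throw new NullPointerException("owner must not be null");
        }
        this.owner = owner;
    }

    public String getOwner() {
        return owner;
    }
}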
Generally speaking, do not throw a RuntimeException or create a subclass of RuntimeException simply because you don't want to be bothered with specifying the exceptions your methods can throw.

Here's the bottom line guideline: If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception.
Howzzaaaaat!!!

Strings Strings Strings

Strings are IMMUTABLE


Strings are value objects. Value objects should always be immutable. If you want a different value (a different String), you need a different object.
String is the most used immutable class in Java. If new Java developers had to deal with a mutable String, they might get very frustrated, overwhelmed by the complexity of the object model, and give up on Java not long after starting.
String s = "nothing special";
s.toUpperCase();

This does not modify the variable s at all. Instead, a new String object is created with all the characters of s changed to upper case. Since the new object is not assigned to anything, it simply loses all references (once the toUpperCase method has completed) and is eventually garbage collected.

On the other hand, you could write it like this:


String s = "nothing special";
s = s.toUpperCase();

In this example, s is now in all upper case letters, but it is not the same object as the one that was instantiated in the first line. The new String object that is returned from toUpperCase becomes assigned to s, and the original object loses its last reference and gets garbage collected.

SAFETY
If step A of a process receives a String and checks it, you don't want some programmer to modify it before it reaches step B, where it's assumed to have been OKed by A.


If they were mutable, imagine doing


getClass().getName().someModifyingOperation();


Good luck finding that class again, Mr. Classloader



Let me first tell you what StringBuilder is. StringBuilder is a class analogous to StringBuffer, added in JDK 1.5. It is designed to be used in places where StringBuffer would be used by a single thread (as in most cases). According to the documentation, StringBuilder should work faster than StringBuffer. So: not thread-safe, but fast. I was reading one of the posts in the orkut Java community asking "what is this capacity in StringBuffer, and since we can add two strings with the String class, why go for StringBuffer?". Valid question!


The GC needs to work a little more in the case of String, but that's fair. As a Java well-wisher, let me try to do some publicity for the Java API :D. And here it goes: no, don't use the String class for concatenation operations, always use StringBuffer/StringBuilder, and let me tell you why.


This is a simple Java program for string addition with String and StringBuffer:
class StringTest {
    public static void main(String[] args) {
        String s = "just a string";
        s = s + "add me too";
        System.out.println(s);
        /*
        StringBuffer s = new StringBuffer("just a string");
        //StringBuilder s = new StringBuilder("just a string");
        s.append("add me too");
        System.out.println(s);
        */
    }
}
Now have a look at the bytecode of this program:
>> javac StringTest.java
>> javap -c StringTest
Just see line no. 11. Interesting, isn't it? The plus sign we used for addition is not as innocent as it looks. The compiler itself uses a StringBuffer (or StringBuilder) to add the two strings, constructing and discarding a new object for every concatenation, and hence taking much more time than a plain append operation on an existing StringBuffer.
Let me give you more evidence: run with the verbose option and check the time
>> javac -verbose StringTest.java
and check the other one, that is, the StringBuffer version. You can clearly figure out the time difference. Try it with StringBuilder too; the time should reduce even further.
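If you would rather measure it directly at runtime instead of going through the compiler's verbose output, a rough micro-benchmark like the following (loop count chosen arbitrarily) makes the difference obvious:

public class ConcatBenchmark {
    public static void main(String[] args) {
        int n = 20000;   // arbitrary; large enough to make the difference visible

        long start = System.currentTimeMillis();
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + "x";   // each + builds a brand-new String (via a temporary StringBuilder)
        }
        System.out.println("String +     : " + (System.currentTimeMillis() - start) + " ms");

        start = System.currentTimeMillis();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("x");   // reuses the same internal buffer
        }
        sb.toString();
        System.out.println("StringBuilder: " + (System.currentTimeMillis() - start) + " ms");
    }
}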

Friday, March 7, 2008

Overriding Static???

How to override static methods ?

I know, I know. You can't override static methods. The title was just a trick to provoke your interest :-) In this post, I'll first try to explain why it is impossible to override static methods and then provide two common ways to do it.
Or rather - two ways to achieve the same effect.
So, what's the problem?

There are situations where you wish you could substitute or extend functionality of existing static members - for example, provide different implementations for it and be able to switch implementations at runtime.

For example, let's consider a static class Log with two static methods:

public static class Log
{
    public static void Message(string message) { ... }
    public static void Error(Exception exception) { ... }
}
Let's say your code calls Log.Message and Log.Error all over the place and you would like to have different logging behaviors - logging to console and to the Debug/Trace listeners. Moreover, you would like to switch logging at runtime based on selected options.


Why can't we override static members?
Really, why? If you think about it, this is just common sense. Overriding usual (instance) members uses the virtual dispatch mechanism to separate the contract from the implementation. The contract is known at compile time (instance member signature), but the implementation is only known at runtime (concrete type of object provides a concrete implementation). You don't know the concrete type of the implementation at compile time.

This is an important thing to understand: when types inherit from other types, they fulfil a common contract, whereas static types are not bound by any contract (from the pure OOP point of view). There's no technical way in the language to tie two static types together with an "inheritance" contract. If you could "override" the Message method in two different places, how would we know which one we are calling here: Log.Message("what is the implementation?")
With static members, you call them by explicitly specifying the type on which they are defined. Which means, you directly call the implementation, which, again, is not bound to any contract.
By the way, that's why static members can't implement interfaces. And that's why virtual dispatch is useless here - all clients directly call the implementation, without any contract.
Let's get back to our problem

Solution 1: Strategy + Singleton

Yes, that's easy. Define a contract to use and it will be automatically separated from the implementation. (Unless you make it sealed. By making a type or a member sealed, you guarantee that no one else can implement this contract.)

OK, so here's our contract:

public abstract class Logger
{
    public abstract void Message(string message);
    public abstract void Error(Exception exception);
}


You could make it an interface as well, but with an interface you wouldn't be able to change the contract later without breaking existing clients.

You'll only need one instance of a logging behavior, so let's create a singleton:

public static class Log
{
    public static Logger Instance { get; set; }
    ...
}


Correct Singleton implementation is not part of this discussion - there is a whole science of how to correctly implement Singleton in .NET - just use your favorite search engine if you're curious.
Now we just redirect the static methods to instance methods of the Logger instance and presto:

public static void Message(string message)
{
Instance.Message(message);
}


And that's it! All of your code continues to use Log.Message and Log.Error, and if you'd like to change the behavior, just say Log.Instance = new DebugWriteLineLogger();
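The example above is C#, but since this is a Java blog, here is roughly the same Strategy-plus-Singleton idea sketched in Java; the ConsoleLogger class and the method names are made up for illustration:

// Contract: the part callers depend on.
abstract class Logger {
    abstract void message(String message);
    abstract void error(Exception exception);
}

// One made-up implementation; another could write to a file, a socket, and so on.
class ConsoleLogger extends Logger {
    void message(String message) { System.out.println("INFO: " + message); }
    void error(Exception exception) { System.out.println("ERROR: " + exception); }
}

// Static facade that delegates to a swappable instance.
public class Log {
    private static Logger instance = new ConsoleLogger();

    public static void setInstance(Logger logger) { instance = logger; }

    public static void message(String message) { instance.message(message); }
    public static void error(Exception exception) { instance.error(exception); }

    public static void main(String[] args) {
        Log.message("logging through the current strategy");
        // Switch the behavior at runtime without touching any call sites:
        Log.setInstance(new ConsoleLogger());   // imagine a DebugLogger here instead
        Log.message("same static call, different implementation possible");
    }
}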

Thursday, March 6, 2008

Hashmap Cloning

clone(): Returns a shallow copy of this HashMap instance: the keys and values themselves are not cloned.

clone() creates and returns a copy of the object. It makes another object of the class in memory. Suppose we are creating a clone of the object x: then x.clone() != x will return true, but it is not necessary that x.clone().equals(x) will return true; it depends upon the implementation of the class. A class should implement the Cloneable interface in order to make clones of its objects; otherwise CloneNotSupportedException will be thrown.

Arrays and HashMap are considered to have implemented the Cloneable interface. Because the fields are copied by assignment and their contents are not themselves cloned, this method performs a shallow copy of the object, not a deep copy operation.

Let's pretend that we can peer deep inside the JVM and actually see the memory locations themselves that are the references. I'll put the memory locations of each object in parentheses.

If we have a hashmap, it might contain 3 K/V pairs:
hashmap1(1000)

keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

If you assign hashmap2 = hashmap1, then you have two references to the same hashmap object. That means that if you were to peer inside hashmap2, you are looking at the exact same hashmap object. Note that using this example, both references point to location 1000:

hashmap2 = hashmap1
-----------------------------------------
hashmap1(1000)
keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

hashmap2(1000)
keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

If you do a shallow clone of hashmap1 instead, you will have two hashmap objects, but with all the same keys and values:
hashmap2 = (HashMap)hashmap1.clone();
-----------------------------------------
hashmap1(1000)
keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

hashmap2(3000) <-----note-----
keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

If you were able to do a deep(er) clone (using your own technique) you might end up with two objects, with copies of all the keys and values:

hashmap2 = magicDeepCloneMethod(hashmap1);
-----------------------------------------
hashmap1(1000)
keyA(2000)---valueA(2010)
keyB(2020)---valueB(2030)
keyC(2040)---valueC(2050)

hashmap2(3000)
keyA(4000)---valueA(4010)
keyB(4020)---valueB(4030)
keyC(4040)---valueC(4050)
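One possible shape for such a magicDeepCloneMethod, assuming the values provide a copy constructor (an assumption of mine, not something HashMap gives you), might look like this:

import java.util.HashMap;
import java.util.Map;

// Hypothetical value type used only for illustration.
class Employee {
    String name;
    Employee(String name) { this.name = name; }
    Employee(Employee other) { this.name = other.name; }   // copy constructor
}

public class DeepCloneDemo {
    // Copy every entry and make a fresh copy of each value; String keys are immutable, so sharing them is fine.
    static Map<String, Employee> deepClone(Map<String, Employee> original) {
        Map<String, Employee> copy = new HashMap<String, Employee>();
        for (Map.Entry<String, Employee> e : original.entrySet()) {
            copy.put(e.getKey(), new Employee(e.getValue()));
        }
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Employee> map1 = new HashMap<String, Employee>();
        map1.put("keyA", new Employee("Fett"));

        Map<String, Employee> shallow = new HashMap<String, Employee>(map1);   // same value objects
        Map<String, Employee> deep = deepClone(map1);                          // new value objects

        System.out.println(map1.get("keyA") == shallow.get("keyA"));   // true
        System.out.println(map1.get("keyA") == deep.get("keyA"));      // false
    }
}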