Saturday, July 25, 2015

Dirty Read, Phantom Read and Non-Repeatable Read

Dirty Read:-
A dirty read occurs when one transaction is changing a record and another transaction reads that record before the first transaction has committed or rolled back. This is known as a dirty read because the first transaction may roll back its change, leaving the second transaction having read invalid data.
Dirty Read Example:-
Transaction A begins.
UPDATE EMPLOYEE SET SALARY = 10000 WHERE EMP_ID = '123';
Transaction B begins.
SELECT * FROM EMPLOYEE;
(Transaction B sees data which is updated by transaction A. But, those updates have not yet been committed.)
Non-Repeatable Read:-
A non-repeatable read happens when, within the same transaction, the same query yields different results. This occurs when one transaction repeatedly retrieves data while a different transaction alters the underlying data, causing the first transaction to read different (non-repeatable) results.
Non-Repeatable Read Example:-
Transaction A begins.
SELECT * FROM EMPLOYEE WHERE EMP_ID = '123';
Transaction B begins.
UPDATE EMPLOYEE SET SALARY = 20000 WHERE EMP_ID = '123';
(Transaction B updates rows viewed by transaction A before transaction A commits.) If transaction A issues the same SELECT statement again, the results will be different.
Phantom Read:-
A phantom read occurs when a transaction executes the same query more than once and the second result set includes rows that were not visible in the first result set. This is caused by another transaction inserting new rows between the two executions of the query. It is similar to a non-repeatable read, except that the number of rows changes (by insertion or deletion) rather than the values of existing rows.
Phantom Read Example:-
Transaction A begins.
SELECT * FROM EMPLOYEE WHERE SALARY > 10000 ;


Transaction B begins.
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, DEPT_ID, SALARY) VALUES ('111', 'Jamie', 10, 35000);
Transaction B inserts a row that would satisfy the query in Transaction A if it were issued again.
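These three anomalies are exactly what the SQL isolation levels are meant to control: READ UNCOMMITTED permits all of them, READ COMMITTED rules out dirty reads, REPEATABLE READ additionally rules out non-repeatable reads, and SERIALIZABLE rules out phantoms as well. A small sketch of this mapping, using the standard constants from java.sql.Connection (the helper method name is our own):

```java
import java.sql.Connection;
import java.util.ArrayList;
import java.util.List;

public class IsolationLevels {
    // Read anomalies that a given JDBC isolation level still permits,
    // per the SQL standard (TRANSACTION_NONE is not considered here).
    // The constants are ordered 1 < 2 < 4 < 8, strictest last.
    static List<String> permittedAnomalies(int level) {
        List<String> anomalies = new ArrayList<>();
        if (level <= Connection.TRANSACTION_READ_UNCOMMITTED) {
            anomalies.add("dirty read");
        }
        if (level <= Connection.TRANSACTION_READ_COMMITTED) {
            anomalies.add("non-repeatable read");
        }
        if (level <= Connection.TRANSACTION_REPEATABLE_READ) {
            anomalies.add("phantom read");
        }
        return anomalies;
    }

    public static void main(String[] args) {
        System.out.println("READ_COMMITTED permits: "
                + permittedAnomalies(Connection.TRANSACTION_READ_COMMITTED));
        System.out.println("SERIALIZABLE permits:   "
                + permittedAnomalies(Connection.TRANSACTION_SERIALIZABLE));
    }
}
```

In practice you pick a level per connection with `connection.setTransactionIsolation(...)`; individual databases may give stronger guarantees than the standard requires.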



source: http://www.ongoinghelp.com/difference-between-dirty-read-non-repeatable-read-and-phantom-read-in-database/

Tuesday, May 19, 2015

Difference between Static vs Volatile

Question:
A static variable is already going to have one value for all threads, so why should we go for volatile? I found the following example:
public class VolatileExample {
    public static void main(String[] args) {
        new ExampleThread("Thread 1 ").start();
        new ExampleThread("Thread 2 ").start();
    }
}

class ExampleThread extends Thread {
    private static volatile int testValue = 1;

    public ExampleThread(String str) {
        super(str);
    }

    public void run() {
        for (int i = 0; i < 3; i++) {
            try {
                System.out.println(getName() + " : " + i);
                if (getName().compareTo("Thread 1 ") == 0) {
                    testValue++;
                    System.out.println("Test Value T1: " + testValue);
                }
                if (getName().compareTo("Thread 2 ") == 0) {
                    System.out.println("Test Value T2: " + testValue);
                }
                Thread.sleep(1000);
            } catch (InterruptedException exception) {
                exception.printStackTrace();
            }
        }
    }
}
Output:
Thread 1 : 0
Test Value T1: 2
Thread 2 : 0
Test Value T2: 2
Thread 1 : 1
Test Value T1: 3
Thread 2 : 1
Test Value T2: 3
Thread 1 : 2
Test Value T1: 4
Thread 2 : 2
Test Value T2: 4
If I remove static from testValue, the result obtained is:
Thread 1 : 0
Test Value T1: 2
Thread 2 : 0
Test Value T2: 1
Thread 1 : 1
Test Value T1: 3
Thread 2 : 1
Test Value T2: 1
Thread 1 : 2
Test Value T1: 4
Thread 2 : 2
Test Value T2: 1
Why is thread 2 not reading the updated value? If the variable has to be made static anyway, what is the use of volatile?
Can someone give a link to a good example of volatile where the variable is not declared static?

Answer:

Declaring a variable static in Java means that there is only one copy, no matter how many objects of the class are created; the variable is accessible even with no objects created at all. However, threads may still have locally cached values of it.
When a variable is volatile but not static, there is one variable per object. So, on the surface it looks no different from a normal instance variable, and totally different from static. However, even with object fields, a thread may cache a value locally.
This means that if two threads update a variable of the same object concurrently, and the variable is not declared volatile, there could be a case in which one of the threads has an old value in its cache.
Even if you access a static variable from multiple threads, each thread can have its own locally cached copy. To avoid this you can declare the variable static volatile, which forces every thread to read the current global value each time.
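To answer the last part of the question: volatile is useful on instance fields whenever two threads share the same object. A minimal sketch (class and method names are made up for illustration) is the classic stop flag, where a worker thread spins until another thread sets a non-static volatile boolean on the shared instance:

```java
public class VolatileFlagDemo {
    // Non-static volatile field: both threads share the same
    // VolatileFlagDemo instance, so visibility still matters.
    private volatile boolean stopped = false;

    // Starts a spinning worker, flips the flag, and returns true
    // if the worker observed the flag and terminated.
    boolean runDemo() {
        Thread worker = new Thread(() -> {
            // Without volatile, this read could legally be served from a
            // cached value forever, and the loop might never exit.
            while (!stopped) {
                Thread.onSpinWait();
            }
        });
        worker.start();
        try {
            Thread.sleep(100);
            stopped = true;          // volatile write: visible to the worker
            worker.join(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker terminated: " + new VolatileFlagDemo().runDemo());
    }
}
```

With the field declared volatile the worker reliably terminates; without volatile, termination is not guaranteed by the Java Memory Model, even though it may still happen on a particular JVM.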


Sunday, May 17, 2015

How LinkedHashMap works Internally

LinkedHashMap

In this article, we will look into the internal workings of LinkedHashMap.

LinkedHashMap vs. HashMap

LinkedHashMap is a HashMap that also defines the iteration ordering using an additional data structure, a doubly-linked list. By default, the iteration order is the same as the insertion order. It can also be the order in which its entries were last accessed, so it can easily be extended to build an LRU cache.

Data Structure

The data structure of LinkedHashMap extends that of HashMap.
In HashMap, the data structure is based on an array and a linked list. An entry finds its location in the array based on its hash value. If an array slot is already occupied, the new entry takes the slot and is linked to the old entry via its next reference.
In HashMap, there is no control on the iteration order.
In LinkedHashMap, the iteration order is defined, either by the insertion order or access order.
LinkedHashMap differs from HashMap in that it maintains a doubly-linked list running through all of its entries. The figure below is a modified version of the HashMap data structure. By default it defines the iteration ordering based on the order in which keys were inserted into the map. To do so, the entry element is extended to keep track of the after and before elements. An empty LinkedHashMap contains just the head element, with its before and after references pointing to itself.
[Figure: LinkedHashMap data structure]
Below is the HashMap data structure:
[Figure: HashMap data structure]

Entry

LinkedHashMap's Entry extends HashMap's Entry, so it inherits the same properties: key, value, hash and the next entry sharing the index. In addition, it has a couple of extra references, after and before, to maintain the doubly-linked list.
[Figure: LinkedHashMap Entry class]
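The real Entry classes are package-private inside the JDK, but the shape described above can be sketched like this (simplified; the layout follows the pre-Java 8 source, and the class names here are our own):

```java
// Simplified sketch of HashMap's bucket entry (pre-Java 8 layout).
class HashEntry<K, V> {
    final K key;
    V value;
    final int hash;
    HashEntry<K, V> next;   // next entry in the same bucket (collision chain)

    HashEntry(K key, V value, int hash) {
        this.key = key;
        this.value = value;
        this.hash = hash;
    }
}

// LinkedHashMap's entry adds the two links for the doubly-linked
// iteration-order list on top of the inherited bucket chain.
class LinkedHashEntry<K, V> extends HashEntry<K, V> {
    LinkedHashEntry<K, V> before, after;

    LinkedHashEntry(K key, V value, int hash) {
        super(key, value, hash);
    }
}
```

So every entry sits in two independent structures at once: the bucket chain (via next) for lookup, and the iteration-order list (via before/after).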

New Entry

LinkedHashMap inherits HashMap, so its internal data structure is the same as HashMap's. Apart from that, it also maintains a doubly-linked list which is circularly linked via a sentinel node called head. Each node contains references to the previous and the next node. A new node is always added to the end of the list. In order to do so, the last node's and the head node's links have to be adjusted:
  1. The new node’s next reference will point to the head.
  2. The new node’s previous reference will point to the current last node.
  3. The current last node’s next reference will point to the new node instead of head.
  4. Head’s previous reference will point to the new node.
after  = head;
before = head.before;
before.after = this;
after.before = this;
[Figure: doubly-linked list]
Performance is likely to be just slightly below that of HashMap, due to the added expense of maintaining the linked list.

Access Ordered

A special LinkedHashMap(capacity, loadFactor, accessOrder) constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently. Invoking the put or get method results in an access to the corresponding entry. If the map is access-ordered, the accessed entry moves to the end of the list; otherwise, nothing happens.
public void testLinkedHashMap() {
    LinkedHashMap<String, String> lru = new LinkedHashMap<>(16, 0.75f, true);
    lru.put("one", null);
    lru.put("two", null);
    lru.put("three", null);

    Iterator<String> itr = lru.keySet().iterator();
    while (itr.hasNext()) {
        System.out.println(itr.next());
    }

    System.out.println("** Access one, will move it to end **");
    lru.get("one");

    itr = lru.keySet().iterator();
    while (itr.hasNext()) {
        System.out.println(itr.next());
    }

    System.out.println("** Access two, will move it to end **");
    lru.put("two", "two");

    itr = lru.keySet().iterator();
    while (itr.hasNext()) {
        System.out.println(itr.next());
    }
}
Result:
one
two
three
** Access one, will move it to end **
two
three
one
** Access two, will move it to end **
three
one
two
Thus in access-ordered linked hash maps, merely querying the map with get is a structural modification.

Iterator

In HashMap, the iterator has to traverse each table slot and the slot's own linked list, requiring time proportional to the map's capacity.
In LinkedHashMap, the iterator simply traverses the doubly-linked list, requiring time proportional to the size of the map rather than its capacity, so HashMap iteration is likely to be more expensive.

Transfer

Resizing is likely to be faster as well, because LinkedHashMap iterates through its doubly-linked list to transfer the contents into the new table array, instead of scanning every table slot.
void transfer(HashMap.Entry[] newTable) {
    int newCapacity = newTable.length;
    for (Entry e = header.after; e != header; e = e.after) {
        int index = indexFor(e.hash, newCapacity);
        e.next = newTable[index];
        newTable[index] = e;
    }
}

Contains Value

containsValue() is overridden to take advantage of the faster iterator.

LRU Cache

If accessOrder is true, the order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access order). This kind of map is well-suited to building LRU caches. The removeEldestEntry(Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map. For example:
protected boolean removeEldestEntry(Map.Entry eldest) {
    return size() > maxCacheSize;
}
The eldest entry is returned by header.after. The default implementation of removeEldestEntry() returns false.
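Putting the pieces together, a complete (if minimal) LRU cache needs only the access-order constructor plus that override; maxCacheSize here is our own parameter, not part of the JDK API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on an access-ordered LinkedHashMap: once the
// map grows beyond maxCacheSize, the least-recently-accessed entry
// (header.after) is evicted automatically on the next put.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxCacheSize;

    public LruCache(int maxCacheSize) {
        super(16, 0.75f, true);   // true = access order
        this.maxCacheSize = maxCacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxCacheSize;
    }
}
```

For example, with a capacity of 2, putting "a" and "b", then getting "a", then putting "c" evicts "b", since "b" is now the least-recently-accessed entry.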

Access Order

The main reason to prefer LinkedHashMap over HashMap is that it can retain the order in which the elements are accessed. Below is the basic flow that a HashMap goes through to put a new entry; the 'Add Entry' and 'Record Access' steps are the ones LinkedHashMap overrides.
It overrides 'Record Access' to record the access order: if the map is access-ordered, it updates its doubly-linked list.
When an entry is accessed, it moves to the end of the linked list, so the head's before reference points to the most recently accessed entry.
If E2 is accessed again, HashMap identifies the entry and then calls recordAccess.
recordAccess is overridden in LinkedHashMap: it extends HashMap's Entry to override the recordAccess method.

Doubly-Linked List vs. Singly-Linked List

Suppose we want to remove an entry E2.
If we had a singly-linked list, E3's next should point to E1, which means that to remove E2 we need to know its previous entry. With a singly-linked list we would end up traversing the list to find the entry previous to E2, so the removal would be O(n).
With a doubly-linked list, E2's previous pointer takes us to E3 in O(1).
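The O(1) removal is easy to see in code. Below is a simplified, self-contained sketch of the unlink step (the field names mirror LinkedHashMap's private Entry; the Node class itself is our own illustration):

```java
// Minimal doubly-linked node, circularly linked via a sentinel head.
class Node<V> {
    V value;
    Node<V> before, after;

    Node(V value) {
        this.value = value;
    }

    // Append this node at the end of the list, just before head.
    void addBefore(Node<V> head) {
        after = head;
        before = head.before;
        before.after = this;
        after.before = this;
    }

    // O(1) removal: the node itself knows both neighbours, so no
    // traversal is needed to find the previous entry.
    void remove() {
        before.after = after;
        after.before = before;
    }
}
```

The two assignments in remove() simply make the neighbours skip this node, which is exactly why the doubly-linked list wins over a singly-linked one here.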


Source: http://javarticles.com/2012/06/linkedhashmap.html