When the Java logging API was first introduced in JDK 1.4 in 2002, it caused quite a lot of fuss, with everybody asking “Why didn’t they just include Log4j instead of creating their own bastard child?”.
I remember looking at it only briefly before continuing to use Log4j on all the projects I have been involved with.
Today, while doing a very small project, I tried once more to use Java logging. The main reason is that I was too lazy to add a dependency on one more jar for this small project. While trying, I found out that:
you still need to use a damned JVM parameter to point to your configuration file
you cannot change the formatting without writing a formatter class (see the sketch below)!
It’s 2009! What has Sun done? I am amazed that the most elementary things you would expect from a logger are still not included by default in the JDK.
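For the record, here is roughly what the second point forces you to write just to get a one-line log format (a minimal sketch of a java.util.logging Formatter; the class name is mine):

import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// A whole class, just to change the output layout of each log record.
public class OneLineFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // date time LEVEL logger: message
        return String.format("%1$tF %1$tT %2$s %3$s: %4$s%n",
                record.getMillis(), record.getLevel(),
                record.getLoggerName(), formatMessage(record));
    }
}

And it still has to be wired in through the properties file of the first point, passed with -Djava.util.logging.config.file=logging.properties, containing a line such as java.util.logging.ConsoleHandler.formatter = OneLineFormatter.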
Black-Scholes gives a strange result for the price of a binary option under high volatility. You will learn here how to simulate a stock price evolution using Java, and how to display it using the JFreeChart library. It starts with the more complex concepts (don't be afraid) and works down towards simpler things.
I could not write all that in a blog format, so I created a plain old HTML page about it here, and a PDF version.
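As a taste of what the page covers, here is a minimal sketch (my own illustration, not the page's code) of simulating one stock price path under the Black-Scholes model, i.e. geometric Brownian motion; the parameter values are arbitrary:

import java.util.Random;

public class StockPathSimulator {

    // Simulate one path of S following dS = mu*S*dt + sigma*S*dW, using the
    // exact log-normal step: S(t+dt) = S(t) * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
    public static double[] simulatePath(double s0, double mu, double sigma,
                                        double maturity, int steps, Random rng) {
        double dt = maturity / steps;
        double drift = (mu - 0.5 * sigma * sigma) * dt;
        double vol = sigma * Math.sqrt(dt);
        double[] path = new double[steps + 1];
        path[0] = s0;
        for (int i = 1; i <= steps; i++) {
            path[i] = path[i - 1] * Math.exp(drift + vol * rng.nextGaussian());
        }
        return path;
    }

    public static void main(String[] args) {
        // spot 100, 5% drift, 40% volatility, 1 year in 252 daily steps
        double[] path = simulatePath(100.0, 0.05, 0.40, 1.0, 252, new Random());
        System.out.println("final price: " + path[path.length - 1]);
    }
}

The resulting array is exactly the kind of series you would then hand to JFreeChart for display.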
There are lots of programs for playing MP3s under Linux, but few deal decently with big libraries. And when you start looking for a program that both crossfades well and manages big libraries easily, there is nothing.
Rhythmbox does some crossfading, but crashes when you seek manually within a song. Audacious does some crossfading, but regularly crashes with its crossfade plugin.
The real alternatives are AIMP2 or Foobar2000 in Wine. It is quite incredible that you can have good, solid crossfade in Wine and not natively in Linux.
Maybe people spent too much time on the useless PulseAudio (I have far fewer issues using only ALSA).
There is a very interesting MS Research paper about test-driven development (TDD). It is one of the only real studies about it that I know of. The paper's conclusion, from experiments comparing 4 TDD teams with 4 traditional teams, is:
"TDD seems to be applicable in various domains and can significantly reduce the defect density of developed software without significant productivity reduction of the development team"
Their data also gives other interesting results:
An experienced team (5 people with over 10 years of experience + 2 with under 5 years): 155 KLOC of C# code (+60 KLOC of tests).
A junior team (3 people with under 10 years of experience + 6 with under 5 years): 41 KLOC of Java code (+28 KLOC of tests).
If you compute the ratio of KLOC per man-month, you get the following graph. I know this is very far from scientific evidence and more like astrology, but still, the most conservative senior/junior ratio is 4.23!
In my previous post, I was wondering why a single thread was faster. D. Andreou gave the correct explanation: as we send only 1 start message, and as each node only sends 1 message to the next one, there is always only 1 message being processed. So the test is optimal on 1 thread. It does not make much sense to run a multithreading benchmark on a problem that is fundamentally single-threaded.
His suggestion was to simply send N start messages, where N >= the number of processors. In theory, performance should then become optimal with N threads. Unfortunately, this is not what happens in real life: the single-threaded version is still faster even if you send 16 messages on a dual-processor machine.
My idea was that this was related to the overhead of switching from thread to thread, which is precisely what I think the original author of the test had in mind to measure. I am not 100% convinced that is really what's happening. I wanted a test that would actually be faster using N threads, so I decided to add a bit of computation before processing each Token. Unfortunately, I had the bad idea of computing Pi by the Monte Carlo method to do that. Running my tests, I was surprised that it did not change the results, and that the more compute-intensive the computation was (the higher the number of Monte Carlo iterations), the worse things got. It scared me a bit, wondering what the hell could be wrong there. The following class performs much worse with 2 threads than with 1:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BadParallelPi {

    // run two Monte Carlo computations on a 1-thread pool, then on a
    // 2-thread pool, and print the elapsed time of each
    private static void startExecutors() throws Exception {
        long startTime = System.currentTimeMillis();
        System.out.println(startTime);
        ExecutorService executor1 = Executors.newFixedThreadPool(1);
        executor1.execute(new Computation());
        executor1.execute(new Computation());
        executor1.shutdown();
        executor1.awaitTermination(60, TimeUnit.SECONDS);
        long delay = System.currentTimeMillis() - startTime;
        System.out.println("finished single thread in " + (delay / 1000.0));

        startTime = System.currentTimeMillis();
        System.out.println(startTime);
        executor1 = Executors.newFixedThreadPool(2);
        executor1.execute(new Computation());
        executor1.execute(new Computation());
        executor1.shutdown();
        executor1.awaitTermination(60, TimeUnit.SECONDS);
        delay = System.currentTimeMillis() - startTime;
        System.out.println("finished 2 threads in " + (delay / 1000.0));
    }

    public static class Computation implements Runnable {
        public volatile int count = 0;

        // estimate Pi by drawing random points in the unit square and
        // counting those falling inside the quarter circle
        private double computePi() {
            double pi = 0;
            double x, y;
            int n = 10000000;
            for (int i = 0; i < n; i++) {
                x = Math.random();
                x *= x;
                y = Math.random();
                y *= y;
                if (x + y < 1) {
                    pi += 1;
                }
            }
            pi = 4 * pi / n;
            return pi;
        }

        public void run() {
            double pi = computePi();
            long time = System.currentTimeMillis();
            System.out.println(time + " thread " + Thread.currentThread().getId() + " pi=" + pi);
            count++;
        }
    }

    public static void main(String[] args) throws Exception {
        startExecutors();
    }
}
Did you figure out why?
It took me less time to find out why with this simple code than with the original ring test. It is simply the Math.random call: Math.random creates only one random number generator, which is shared among threads. So every thread ends up waiting for the others at this point. Creating one random generator per thread finally showed 2 threads being much faster than 1.
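A minimal sketch of the fix (only the Computation class changes): give each instance its own java.util.Random, so threads no longer contend on the generator shared inside Math.random.

import java.util.Random;

public static class Computation implements Runnable {
    // one generator per Computation instead of the JVM-wide Math.random one
    private final Random random = new Random();

    public void run() {
        double pi = 0;
        int n = 10000000;
        for (int i = 0; i < n; i++) {
            double x = random.nextDouble();
            double y = random.nextDouble();
            if (x * x + y * y < 1) {
                pi += 1;
            }
        }
        pi = 4 * pi / n;
        System.out.println(Thread.currentThread().getId() + " pi=" + pi);
    }
}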
Back to the original ring test. Adding the correct way of computing Pi by Monte Carlo, I now had decent test results, as long as the number of iterations was not too small: 10 iterations is enough to show a real difference between N threads and 1. Adding a small computation helps figure out what happens behind the scenes. You can also verify D. Andreou's claim: using only 1 start message, the single-threaded version is faster. If the computation is too weak (for example, a Monte Carlo iteration count of 0), one only measures method calls between threads (context switching), which is obviously optimal for 1 thread. Measuring Actor libraries on it is dangerous: if I write a single-threaded Actor library, it will be the fastest in this test, but it is certainly not what you want to use as an Actor library.
Let's now see how Scala fares compared to the plain Java solution, using computation:
Machine   | Algorithm                | Time for 100000 ring count, 10 mc, 4 messages | Time for 10000 ring count, 100 mc, 4 messages
Core2Duo  | OptimizedRing 2 Threads  | 57s  | 37s
Core2Duo  | OptimizedRing 4 Threads  | 78s  | 39s
Core2Duo  | Scala Actors             | 82s  | 47s
Core2Duo  | SimpleRing (100 Threads) | 137s | 58s
Core2Duo  | OptimizedRing 1 Thread   | 89s  | 71s
Core2Quad | OptimizedRing 4 Threads  | 81s  | 25s
Core2Quad | Scala Actors             | 71s  | 30s
Core2Quad | OptimizedRing 2 Threads  | 61s  | 43s
Core2Quad | OptimizedRing 1 Thread   | 100s | 80s
The Core2Duo is Intel(R) Core(TM)2 Duo CPU T7250 @ 2.00GHz
The Core2Quad is Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
It is interesting to compare the results of 4 threads on the dual-core machine with Monte Carlo counts of 10 and 100. We see a much higher thread overhead with less computation: when there is too little computation in the Monte Carlo step, the overhead of running more than 2 concurrent threads is too high. This explains why the very simple threading architecture fares much better in the last column than in the previous one.
Scala Actors fares much better when it is not hindered by the creation of too many threads. It actually seems very good at abstracting away multithreading intricacies, while still providing near-Java performance in the real world, where each actor does enough computation and multithreading is important.
A while ago, I got a comment from someone implying that I knew nothing about OO programming because I had not mentioned (and therefore had not read) Object-Oriented Analysis and Design with Applications by G. Booch. I was intrigued by such a silly comment and decided to look at this book that is considered the bible of OOP.
Well, I don't find it that good! But then, I don't find the Bible particularly good either. I like B. Meyer's Object-Oriented Software Construction much more, because it is more practical, more in touch with reality, while pointing at the really important problems, like:
Real systems have no top
In contrast, G. Booch's book has too many self-evident concepts that don't really make you learn anything or think about things in a different way. It is a good book for someone learning OO for the first time. It covers the subject in detail, but I did not find anything in it that made me say
wow!
It is like most other books you can find on OO. Furthermore, only the first parts are about OO; the rest is more of a UML tutorial.
I wrote my previous post too fast. I found a very simple change that increases the speed 6x!
The idea is to process messages in a ThreadPoolExecutor. As my Nodes are Runnable, I just needed to initialize a common ThreadPoolExecutor and, in sendMessage, execute the runnable each time.
Here is the full code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class OptimizedRing {
    private ExecutorService executor;

    public static void main(String[] args) throws Exception {
        OptimizedRing ring = new OptimizedRing();
        RingNode node = ring.startRing(Integer.parseInt(args[0]));
        node.sendMessage(new StartMessage());
    }

    // startRing was missing from the original listing; this is the obvious
    // composition of the three helpers below
    public RingNode startRing(int n) {
        Timer timer = startTimer();
        RingNode[] nodes = spawnNodes(n, timer);
        connectNodes(n, nodes);
        return nodes[0];
    }

    private Timer startTimer() {
        Timer timer = new Timer();
        new Thread(timer).start();
        return timer;
    }

    private RingNode[] spawnNodes(int n, final Timer timer) {
        System.out.println("constructing nodes");
        long start = System.currentTimeMillis();
        executor = Executors.newFixedThreadPool(4);
        RingNode[] nodes = new RingNode[n + 1];
        for (int i = 0; i < n; i++) {
            nodes[i] = new RingNode(i, timer, null);
        }
        long end = System.currentTimeMillis();
        System.out.println("Took " + (end - start) + "ms to construct " + n + " nodes");
        return nodes;
    }

    private void connectNodes(int n, RingNode[] nodes) {
        System.out.println("connecting nodes");
        nodes[n] = nodes[0];
        for (int i = 0; i < n; i++) {
            nodes[i].connect(nodes[i + 1]);
        }
    }

    public static class Timer implements Runnable {
        // field declarations were not in the original listing
        private final BlockingQueue<Message> queue = new LinkedBlockingQueue<Message>();
        private long startTime;
        private boolean timing;

        public void sendMessage(Message m) {
            // we don't need to change this implementation as timer is rarely called
            queue.add(m);
        }

        public void run() {
            while (true) {
                Message m;
                try {
                    m = queue.take();
                    if (m instanceof StartMessage) {
                        startTime = System.currentTimeMillis();
                        timing = true;
                    } else if (m instanceof StopMessage) {
                        long end = System.currentTimeMillis();
                        System.out.println("Start=" + startTime + " Stop=" + end + " Elapsed=" + (end - startTime));
                        timing = false;
                    } else if (m instanceof CancelMessage) {
                        break;
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Code                       | Spawn | Send 100M messages
Scala Actors               | 15ms  | 270104ms
SimpleRing                 | 11ms  | 493073ms
OptimizedRing (4 threads)  | 6ms   | 84727ms
OptimizedRing (5+ threads) | 5ms   | 62593ms
OptimizedRing (1 thread)   | 5ms   | 60660ms
I finally saw my 4 cores used! Maximum multithreaded throughput is achieved at 5 threads. However, 1 thread is still faster. Is this related to a memory bandwidth limit?
Now I am left wondering if actors are really that important, when one can achieve much higher throughput using plain Java and very simple concepts (BlockingQueue, ThreadPoolExecutor). Worse, this test is actually faster with only 1 thread...
Alex Miller has a very interesting test of Actors. He finds that Scala performance is relatively low compared to Erlang, and that Kilim is very near Erlang. But the Kilim code is the most difficult to read of the lot.
I thought it would be simple to just do the same test in plain Java, so I wrote the code for it, duplicating the Scala logic but using Threads instead of Actors.
I was a bit surprised by the result: it was slow, and only 1 thread was really active at a time. This is why the test is particularly good. It is not trivial to reproduce the functionality in plain Java in an efficient manner, and it really shows how the concept of Actors can be useful. A sketch of the naive approach follows.
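To make the comparison concrete, here is a minimal sketch (mine, not the benchmark code) of the thread-per-node approach: each node owns a BlockingQueue and a dedicated thread, and forwards a decrementing token around the ring.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class NaiveRing {

    static class Node implements Runnable {
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>();
        Node next;

        public void run() {
            try {
                while (true) {
                    int token = queue.take();
                    if (token == 0) {
                        next.queue.add(0); // propagate the stop signal once around
                        break;
                    }
                    next.queue.add(token - 1); // forward the decremented token
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        int n = 100;
        Node[] nodes = new Node[n];
        for (int i = 0; i < n; i++) {
            nodes[i] = new Node();
        }
        for (int i = 0; i < n; i++) {
            nodes[i].next = nodes[(i + 1) % n];
            new Thread(nodes[i]).start(); // one thread per node: the costly part
        }
        nodes[0].queue.add(1000000); // inject the start token
    }
}

Watch it with a thread dump: since only one token circulates, at most one node thread is ever doing work, which is exactly the behavior described above.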
We saw in a previous entry how one has to be careful with Double.NaN. Today we will see how a regular double can cause problems. By the way, the NaN issue was not Java-specific, and today's issue is just as general across programming languages.
A coworker was shocked that, in Java (I was a bit surprised he only noticed it now, but it is true that it can be surprising that such a simple thing does not work as expected):
408.16 - 40.82 = 367.34000000000003
In C, this would lead to the same result. It is all due to the binary representation of double numbers, which are stored as 2^exponent * 1.mantissa, where the mantissa is on 52 bits.
408.16 decomposition:
The power of two is 256. Then 408.16/256 = 1.594375 = 1 + 0x9828F5C28F5C28F5C...*2^-52
Rounded to 52 bits, the mantissa is 0x9828F5C28F5C3 = 2676827028518339.
As a decimal, the internal value is (2676827028518339/2^52+1) * 256 = 408.1600000000000250111042987555265426635742
40.82 decomposition:
The power of two is 32. Then 40.82/32 = 1.275625 = 1 + 0x468F5C28F5C28F5C...*2^-52
Rounded to 52 bits, the mantissa is 0x468F5C28F5C29 = 1241304647293993
As a decimal, the internal value is (1241304647293993/2^52 + 1)*32 = 40.8200000000000002842170943040400743484497
The difference in decimal is 367.34000000000002472..., which becomes 367.34000000000003 once rounded to the nearest double and printed (to convince yourself, you can apply the same technique).
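If you want to check these numbers yourself, the raw IEEE 754 bits are easy to extract in Java (a small verification snippet of mine, not part of the original explanation):

public class DoubleBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(408.16);
        int exponent = (int) ((bits >> 52) & 0x7FF) - 1023; // unbiased exponent
        long mantissa = bits & 0xFFFFFFFFFFFFFL;            // low 52 bits
        System.out.println("2^" + exponent);                   // 2^8, i.e. the factor 256
        System.out.println("0x" + Long.toHexString(mantissa)); // 0x9828f5c28f5c3
        System.out.println(new java.math.BigDecimal(408.16));  // the exact stored value
    }
}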
The Solution
One solution to this problem is to use java.math.BigDecimal, which stores a number as 2 integers: one for the digits, and one for the exponent, a power of 10 (not of 2). The correct code becomes: BigDecimal value = BigDecimal.valueOf(408.16).subtract(BigDecimal.valueOf(40.82));
value would then be 367.34.
But BigDecimal also comes with plenty of potential for bugs. For example, you should never use the constructor taking a double, but always the one taking a String.
new BigDecimal(408.16) = 408.16000000000002501110429875552654266357421875
This is because of the binary representation of 408.16 as a double: the double closest to 408.16 is only an approximation of the decimal number 408.16!
Another trick with BigDecimal is to use compareTo instead of equals, because 408.160 is not equal to 408.16 according to equals (equals compares the scale too).
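Both traps in one runnable snippet (my own demo, reusing the numbers from above):

import java.math.BigDecimal;

public class BigDecimalPitfalls {
    public static void main(String[] args) {
        // Pitfall 1: the double constructor keeps the binary approximation.
        System.out.println(new BigDecimal(408.16));     // 408.16000000000002501...
        System.out.println(new BigDecimal("408.16"));   // 408.16
        System.out.println(BigDecimal.valueOf(408.16)); // 408.16 (goes through Double.toString)

        // Pitfall 2: equals also compares the scale; compareTo does not.
        BigDecimal a = new BigDecimal("408.160");
        BigDecimal b = new BigDecimal("408.16");
        System.out.println(a.equals(b));         // false: same value, different scale
        System.out.println(a.compareTo(b) == 0); // true
    }
}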
Why Couldn't They Make It Work with double?
If you were too lazy to follow the steps of the explanation, there is a simpler one. Imagine the representation of numbers in base 3, and let's say 1/3 is represented there as 0.1 (a very simple number representation). Then 1/3 + 1/3 + 1/3 becomes 0.1 + 0.1 + 0.1 = 1.0 (in base 3) = 1.0 when converted to base 10. Now in base 10, with a finite number of digits, 1/3 can only be approximated as 0.3, so 1/3 + 1/3 + 1/3 = 0.3 + 0.3 + 0.3 = 0.9 ≠ 1.0. Binary doubles have exactly the same problem with most decimal numbers. So BigDecimal is only interesting to handle... decimals! In the enterprise world, that should be most apps. It is a bit sad it appeared so late in the JDK. It should really be a primitive type.
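You can play the same game against binary doubles in a few lines of Java (a tiny demo of mine, mirroring the base-3 analogy):

public class TenthSum {
    public static void main(String[] args) {
        // 0.1 has no exact binary representation, so ten of them do not make 1.0
        double sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;
        }
        System.out.println(sum); // prints 0.9999999999999999
    }
}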
My brother just sent me a funny quote. I don’t know if it is true or not:
Would you rather develop convincing solutions than spend many hours coding? Is your team's productivity important to you?
Mark Driver, VP of Research at Gartner, recently commented:
“Here’s a simple equation. In terms of mental fortitude…
1 Smalltalk developer = 2.5 C++ developers
1 C++ developer = 1.5 Java developers”.
You don't need German to understand the quote itself. Of course it cannot be true: how can anyone measure mental fortitude? And how it relates to productivity is yet another issue.
If you want a VCS that is written in C++, go play with Monotone. Really. They use a “real database”. They use “nice object-oriented libraries”. They use “nice C++ abstractions”. And quite frankly, as a result of all these design decisions that sound so appealing to some CS people, the end result is a horrible and unmaintainable mess.
This, however, is quite interesting, since I am probably not the only one to have seen the disastrous effects of over-enthusiastic abstraction professionals in Java. This is why many Java projects are utter crap. But again, not everybody makes this mistake; some people know how to build good things. As a result of Java's popularity, we have many crap programmers with pseudo-genius theories that you don't find in C or Haskell.
On another subject, I seriously wonder why we don't have more distributed compilation in Java, as in C/C++. I am tired of seeing those cores doing nothing while compiling.