## Tag Archives: java

A couple of years back, give or take, I resolved to get serious with my dream of becoming a guitar god. So I wrote an app that would help me practice playing/jamming. When I finally resolved to write Museic in Java, it occurred to me that the nature of the app made it a good mobile app project. However, a few factors, like my machine specs at the time, kept me limited to a hacky desktop app.

Fast forward to today: I’m a better guitarist but nowhere close to being Jason Mraz or Ed Sheeran. However, I’d like to think that the capability of my personal machine has increased in greater proportion than my guitar-playing skill. So it seemed to me that, at last, it was time to tackle that good mobile app project.

And tackle it I did! Despite a couple more setbacks that I won’t go into here, it is with pride that I present the first mobile app I wrote exclusively by myself. And whereas most of my projects just live on GitHub, often not in a polished state, “deployed” on my local machine at most, this time I have something that definitely screams “finished product”: a page in the app store.

This is incidentally the first time I tried using the built-in page screenshot tool in Chrome. Can’t say I’m very impressed.

Born out of my own frustrations playing the guitar, making this felt awesome in more ways than one. Of course it lessens my frustration when learning a piece on the guitar. And then there’s that fleeting high when you create something that was not there before, when you get something to work that you previously had no idea how to do. It is not perfect, not very pretty, but I am very pleased with myself for making this. Working in the industry, it is not every day that you get to personally help yourself with a program you wrote, something you built not because you’d get in trouble otherwise, and when it happens Big Data can go hang. I managed to help myself and improve the quality of my guitar practice without having to set up TensorFlow. I feel like 90% of start-ups can’t even say that. Wink wink cringe.

I have a few more things I’d like to achieve with Museician, though I also feel that I’m more likely to just get this to a really-polished state, maybe add just one more feature (if it does not take too long), and then probably jump to the next thing I want to learn. The bad thing about working in Software Engineering today is that there is always a lot to learn. The good thing about working in Software Engineering today is that you need to be constantly learning.

Following up from my previous post, I decided to use Java for Museic since, ironically, it looked easier to write platform-portable code that plays MP3 files in Java than in Python. And it is, but that is not to say that the road to my goal was bump-free.

First off, the “default” way to play MP3 files in Java is to use the JavaFX library. However, JavaFX is not bundled with Java 6 at all (you need to include specific JARs in your class path to use it), is only half-included in Java 7 (you still need to be specific with your class path), and is fully included only in Java 8.

The obvious way to stop worrying about the version of Java installed in your system is to use a dependency manager, my choice being Maven. Maven has a JavaFX plugin. All you need to do is (1) fix your class path by issuing the command `mvn com.zenjava:javafx-maven-plugin:2.0:fix-classpath` and (2) add the following to your pom.xml:

```xml
<plugin>
    <groupId>com.zenjava</groupId>
    <artifactId>javafx-maven-plugin</artifactId>
    <version>2.0</version>
    <configuration>
        <mainClass>[put your application main class here]</mainClass>
    </configuration>
</plugin>
```

Alas, I wish it were that easy. On my system, it was complaining about some missing JARs. It seemed to me that I was encountering the very problems I had hoped to avoid by using Maven.

Then I realized that I was using OpenJDK 7 instead of Oracle JDK 7. Now, I’m aware of the distinction between OpenJDK and Oracle JDK; at Chikka, we’ve made it a point to use Oracle JDK in production. Nonetheless, for my personal projects, I’ve never experienced any significant difference with OpenJDK, so I kept using it, as it took less effort to install on Ubuntu. Until now.

The moral of the story is: if you are going to use JavaFX, make sure you are using Oracle JDK.

(And yes, Museic is now playing MP3 files with my desired delay, thank you very much. It’s now my practice/jam buddy!)

One of Java’s biggest advantages over C is its support for strings. The String class is so well-integrated into the language (it is part of the java.lang package) that newbies easily mistake it for a primitive type; in a typical Java introductory course, you’ll almost always find someone confused about why == comparison on Strings does not behave as expected (e.g., new String("Hello World") == "Hello World" returns false, since == compares references rather than contents). One of its most useful features is how it allows concatenation using the + operator. Very intuitive.

However, this convenience comes with a price, as I will discuss in this post. Also, we’ll look at one way to mitigate this computational overhead.

### Java’s Ruse
Tell me, what’s wrong/inefficient with the following lines of code?

```java
1   String[] names = {"Homer", "Lenny", "Karl", "Barney"};
2   String greetEveryone = "Hello";
3
4   for(int i = 0, limit = names.length; i < limit; i++){
5       greetEveryone += " " + names[i];
6   }
7
8   System.out.println(greetEveryone);
```

See nothing wrong? As I noted above, Java’s support for concatenation through the + operator is very intuitive. So intuitive, in fact, that it has led a lot of people (myself included!) to overlook the String specification of the Java API. And I quote the JDK API 7 specification1:

Strings are constant; their values cannot be changed after they are created. …String objects are immutable…

Strings are immutable. Unlike incrementing the index variable i, which stores the incremented value in the same block of memory as the original value, concatenating a String actually creates a new object (quite a costly operation). The reference greetEveryone is simply updated to point to the newly created object so that, come line 8, you get to greet the whole gang.
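You can see the new-object behavior directly with a quick sketch (the method name is mine; note that == on Strings compares references, not contents):

```java
public class ImmutabilityDemo {
    // Returns true only if s still points at the same object after +=.
    static boolean sameObjectAfterConcat() {
        String s = "Hello";
        String before = s;   // keep a reference to the original object
        s += " world";       // allocates a brand-new String behind the scenes
        return before == s;  // reference comparison, not content comparison
    }

    public static void main(String[] args) {
        System.out.println(sameObjectAfterConcat()); // false: += created a new object
    }
}
```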

Java actually sees line 5 as something like,

```java
String space = " ";
String withName = space.concat(names[i]);
String updatedGreeting = greetEveryone.concat(withName);
greetEveryone = updatedGreeting;
```

It creates three objects just so it can update one variable! And at every iteration of the loop, you create more Strings floating in memory. Putting the variable declarations outside the loop does not help either because, as I said, Strings are immutable and every assignment to an already-declared String variable creates a new String object.

Can’t we do better? Fortunately, Java already has a solution to this costly operation: StringBuilder and its thread-safe brother, StringBuffer.

And I quote from StringBuilder’s API 7 specification2:

(StringBuilder is) [a] mutable sequence of characters.

The principal operations on a StringBuilder are the append and insert methods, which are overloaded so as to accept data of any type.

With StringBuilder, we can make Java read our code above as follows:

```java
String[] names = {"Homer", "Lenny", "Karl", "Barney"};
StringBuilder greetEveryone = new StringBuilder("Hello");

for(int i = 0, limit = names.length; i < limit; i++){
    String space = " ";
    String withName = space.concat(names[i]);
    greetEveryone.append(withName);
}

System.out.println(greetEveryone.toString());
```

### But, that doesn’t seem to do that much…
For some of your typical uses of Strings—System.out.printing a debug trace or creating UI labels, for instance—using a StringBuilder might be overkill; debug traces may be disabled without affecting the program and UI labels are rarely concatenated with anything else. But some Java operations rely so heavily on Strings that a few extra lines declaring StringBuilders and calling their toString() later on is a worthy investment. IBM developerWorks3 notes a throughput increase of 2.3x using StringBuffer over plain-old concatenation (which, again, does not seem like much unless you read the rest of this post).

An instance where you’ll be very thankful for StringBuilder and StringBuffer is when you work with JDBC. While most of your queries will be pre-determined and hence hardcoded, every now and then you will have to generate a dynamic query (based on, say, user input). Querying a large database is already costly even without the expense of generating temporary objects for String concatenation. Suddenly, the meager 2.3x throughput increase seems like a very generous deal. And heck it is.
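As an illustration, a dynamic query with a variable number of filters might be assembled like this. This is just a sketch: the `users` table and the column names are made up, and in real JDBC code you would bind the values through a PreparedStatement rather than concatenating them in.

```java
import java.util.ArrayList;
import java.util.List;

public class QueryBuilderDemo {
    // Builds a SELECT with a variable number of filter clauses using a
    // single StringBuilder instead of repeated String concatenation.
    static String buildQuery(List<String> filters) {
        StringBuilder query = new StringBuilder("SELECT * FROM users");
        for (int i = 0; i < filters.size(); i++) {
            // The first filter gets WHERE; the rest get AND.
            query.append(i == 0 ? " WHERE " : " AND ").append(filters.get(i));
        }
        return query.toString();
    }

    public static void main(String[] args) {
        List<String> filters = new ArrayList<>();
        filters.add("age > ?");
        filters.add("city = ?");
        System.out.println(buildQuery(filters));
        // SELECT * FROM users WHERE age > ? AND city = ?
    }
}
```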

Looking for more ways to optimize Java code? Read Peter Sestoft’s paper on Java performance. For the numerical algorithms geek, he also has a paper on numeric performance in C, C#, and Java.

Note: My thesis is done. However, there are still some things which I find of interest to address here. Due to how our thesis was (mis)handled, I’m only finding the time to post about these things now.

Last time, I told you how we were porting all our Matlab/Octave code to Java. I wrote about a huge difference between Matlab’s and Octave’s implementations of rgb2gray. However, as it became apparent to me, my issues with rgb2gray were not quite done yet.

First, a little background. Our project relies heavily on the Fast Fourier Transform, a numerical algorithm whose products I bet you encounter every day, though you are probably not aware of it. The 2D FFT is built into Octave, as expected of a language meant specifically for mathematical processing. The same, however, cannot be said for Java.

It’d be crazy for me to attempt to code it from scratch for a number of reasons, not the least of which is that I don’t really have solid training in numerical algorithms. So, after browsing through various libraries, we decided to use JTransforms, largely because we can see the source code for ourselves and it is natively in Java, in contrast to, say, FFTW, which has Java bindings but is natively in C. Plus, it does not impose power-of-2 constraints on input sizes, a common trait among various implementations of the FFT.

Now, the thing is, there seems to be quite a considerable offset between the FFT results of Octave1 and JTransforms; the kind of offset you’d be tempted to chalk up to floating-point handling differences, though some part of you knows it is too large to be so. The offset I was seeing ranged from 2 to 12 integer values.

As I admitted earlier, I don’t have any solid grounding in numerical algorithms. The way I see it, there are a number of possible culprits for why I’m not getting the results I should. The FFT, after all, is a blanket term for a family of algorithms which all do the same thing, namely, apply the Fourier Transform to a discrete collection of values (i.e., vectors or matrices).

### What I’ve Tried (Or, the scientifically probable reason for this offset, as discussed by my adviser)

Warning: Some equations ahead. I won’t be delving too deeply into them, and you should be able to understand my main point with just some basic algebra. But just so you know.

Consider this tutorial on the FFT.

It defines the 2D FFT as:

$F[k,l] = \frac{1}{\sqrt{MN}}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}f[m,n]e^{-2\pi i(\frac{mk}{M}+\frac{nl}{N})}$

$f[m,n] = \frac{1}{\sqrt{MN}}\sum_{l=0}^{N-1}\sum_{k=0}^{M-1}F[k,l]e^{2\pi i(\frac{mk}{M}+\frac{nl}{N})}$

$0 \leq m,k \leq M-1, \quad 0 \leq n,l \leq N-1$

where M and N are the number of samples in the x and y directions (or, for our particular problem, the dimensions of the image).

While it may seem interesting to find mutual recursion here, what my adviser told me to note is the constant at the beginning of the equations, $\frac{1}{\sqrt{MN}}$. According to him, depending on the implementation of the FFT you are using (remember what I told you above: Fast Fourier Transform is just a blanket term for a family of algorithms), the constant varies between $\frac{1}{\sqrt{MN}}$ and $\frac{1}{MN}$. It’s then just a matter of scaling the samples (the pixel values) by the appropriate constant to get the desired results.
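Since the Fourier transform is linear, scaling the input scales the output by the same factor, which makes the conventions easy to reconcile. For the 8×8 example that follows, in particular,

$\frac{1}{\sqrt{MN}} = \frac{1}{\sqrt{8 \cdot 8}} = \frac{1}{8}$

so dividing the input samples by 8 turns an unnormalized FFT’s output into the $\frac{1}{\sqrt{MN}}$-normalized result.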

The tutorial I linked to presents the following sample signal to demonstrate the FFT:

which should evaluate to the following real and imaginary component respectively

The following code listing directly translates the above example to Octave code:

```matlab
x_real = zeros(8, 8);
x_real(2,3) = 70;  x_real(2,4) = 80;  x_real(2,5) = 90;
x_real(3,3) = 90;  x_real(3,4) = 100; x_real(3,5) = 110;
x_real(4,3) = 110; x_real(4,4) = 120; x_real(4,5) = 130;
x_real(5,3) = 130; x_real(5,4) = 140; x_real(5,5) = 150
x_fft = fft2(x_real);
x_r = real(x_fft)
x_i = imag(x_fft)
```

Which results in the following output:

```text
x_real =

     0     0     0     0     0     0     0     0
     0     0    70    80    90     0     0     0
     0     0    90   100   110     0     0     0
     0     0   110   120   130     0     0     0
     0     0   130   140   150     0     0     0
     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0
     0     0     0     0     0     0     0     0

x_r =

   1.3200e+03  -7.9113e+02   8.0000e+01  -1.6887e+02   4.4000e+02  -1.6887e+02   8.0000e+01  -7.9113e+02
  -5.0485e+02  -9.0711e+01   2.2142e+02   1.0586e+02  -1.6828e+02   1.2721e+01  -2.6142e+02   6.8527e+02
   1.2000e+02   0.0000e+00  -4.0000e+01  -2.3431e+01   4.0000e+01   0.0000e+00   4.0000e+01  -1.3657e+02
  -3.3515e+02   1.3414e+02   2.1421e+01   5.0711e+01  -1.1172e+02   3.4731e+01  -6.1421e+01   2.6728e+02
   1.2000e+02  -6.8284e+01   0.0000e+00  -1.1716e+01   4.0000e+01  -1.1716e+01   0.0000e+00  -6.8284e+01
  -3.3515e+02   2.6728e+02  -6.1421e+01   3.4731e+01  -1.1172e+02   5.0711e+01   2.1421e+01   1.3414e+02
   1.2000e+02  -1.3657e+02   4.0000e+01   0.0000e+00   4.0000e+01  -2.3431e+01  -4.0000e+01   0.0000e+00
  -5.0485e+02   6.8527e+02  -2.6142e+02   1.2721e+01  -1.6828e+02   1.0586e+02   2.2142e+02  -9.0711e+01

x_i =

     0.00000  -711.12698   440.00000    88.87302     0.00000   -88.87302  -440.00000   711.12698
  -724.26407   713.55339  -216.56854    55.56349  -241.42136   134.14214   120.00000   158.99495
   120.00000  -136.56854    40.00000     0.00000    40.00000   -23.43146   -40.00000     0.00000
  -124.26407   255.56349  -120.00000    -6.44661   -41.42136    38.99495   103.43146  -105.85786
     0.00000   -68.28427    40.00000    11.71573     0.00000   -11.71573   -40.00000    68.28427
   124.26407   105.85786  -103.43146   -38.99495    41.42136     6.44661   120.00000  -255.56349
  -120.00000    -0.00000    40.00000    23.43146   -40.00000    -0.00000   -40.00000   136.56854
   724.26407  -158.99495  -120.00000  -134.14214   241.42136   -55.56349   216.56854  -713.55339
```

As is evident, the results are very far from what our source says. However, dividing the input samples (matrix x_real) by 8 (cf. the sample size) produces the results given by our source.

Add the following line before calling fft2

```matlab
x_real /= 8;
```

And we get,

```text
x_r =

   165.00000   -98.89087    10.00000   -21.10913    55.00000   -21.10913    10.00000   -98.89087
   -63.10660   -11.33883    27.67767    13.23223   -21.03553     1.59010   -32.67767    85.65864
    15.00000     0.00000    -5.00000    -2.92893     5.00000     0.00000     5.00000   -17.07107
   -41.89340    16.76777     2.67767     6.33883   -13.96447     4.34136    -7.67767    33.40990
    15.00000    -8.53553     0.00000    -1.46447     5.00000    -1.46447     0.00000    -8.53553
   -41.89340    33.40990    -7.67767     4.34136   -13.96447     6.33883     2.67767    16.76777
    15.00000   -17.07107     5.00000     0.00000     5.00000    -2.92893    -5.00000     0.00000
   -63.10660    85.65864   -32.67767     1.59010   -21.03553    13.23223    27.67767   -11.33883

x_i =

     0.00000   -88.89087    55.00000    11.10913     0.00000   -11.10913   -55.00000    88.89087
   -90.53301    89.19417   -27.07107     6.94544   -30.17767    16.76777    15.00000    19.87437
    15.00000   -17.07107     5.00000     0.00000     5.00000    -2.92893    -5.00000     0.00000
   -15.53301    31.94544   -15.00000    -0.80583    -5.17767     4.87437    12.92893   -13.23223
     0.00000    -8.53553     5.00000     1.46447     0.00000    -1.46447    -5.00000     8.53553
    15.53301    13.23223   -12.92893    -4.87437     5.17767     0.80583    15.00000   -31.94544
   -15.00000    -0.00000     5.00000     2.92893    -5.00000    -0.00000    -5.00000    17.07107
    90.53301   -19.87437   -15.00000   -16.76777    30.17767    -6.94544    27.07107   -89.19417
```

Which, at last, agrees with our source.

### The Blooper (or, the actual and embarrassing reason why the JTransforms FFT does not coincide with the Octave FFT)

In my first post about rgb2gray, I mentioned that I was taking the floor of the average of the RGB components of a pixel (or, in Java code, just performing integer division, as the resulting precision loss is tantamount to a floor). Big mistake. Octave does not merely floor values. It rounds them off.

In code, so there can be no misunderstandings:

```java
public static int[][] rgb2gray(BufferedImage bi) {
    int heightLimit = bi.getHeight();
    int widthLimit = bi.getWidth();
    int[][] converted = new int[heightLimit][widthLimit];

    for (int height = 0; height < heightLimit; height++) {
        for (int width = 0; width < widthLimit; width++) {
            Color c = new Color(bi.getRGB(width, height) & 0x00ffffff);
            float rgbSum = c.getRed() + c.getGreen() + c.getBlue();
            float ave = (rgbSum / 3f);
            float diff = ave % 1f;
            int iave = diff < 0.5f ? (int) ave : (int) (ave + 1);
            converted[height][width] = iave;
        }
    }

    return converted;
}
```

The moral of this long story? If two supposedly-similar functions are returning different outputs, check that the inputs are exactly the same before jumping to any wild conclusions.

1. I’d like to note here that, according to the help file, Octave’s FFT is based on FFTW.

Due to the nature of the algorithms we are testing for our thesis, we had to “prototype” the procedures in Matlab so that we can easily modify parameters and test variables. However, since Matlab is expensive and we are a university that does not tolerate piracy ;), we used GNU Octave, a FOSS equivalent of Matlab (think Mono for C#).

We are done with the algorithm-prototyping part and we are now porting our Matlab code to Java, since this thesis is meant to be used by scientists, with a GUI and all that comes with standard software.  A big part of this task is in coding the functions that are built-in in Matlab; remember that Matlab is meant specially for mathematical purposes (it is a portmanteau of Matrix Laboratory) while Java is more general purpose, closer to metal, if you will.

For the past few days, I’ve been trying to implement the Matlab function rgb2gray which, as the name suggests, converts an RGB-colored image to grayscale. Now, there are a lot of ways to convert an image to grayscale but getting a grayscale isn’t the main point here. Getting it the way Matlab/Octave does is essential so that we can recreate in Java the recognition accuracy we achieved in Octave. We will be manipulating these pixel values after all.

So, I looked into Matlab’s documentation of rgb2gray and found that, for a given RGB pixel, it gets the corresponding grayscale value by the following weighted sum:

0.2989 * R + 0.5870 * G + 0.1140 * B

(Or something close to those constants, giving the same priority to the RGB components. That is, green weighted most, followed by red, and then blue. This priority reflects the sensitivity of the human eye to these colors. See Luma (video).)

I then ran some tests on Octave to verify the docs:

```matlab
octave3.2:1> four = imread("four.JPG");
octave3.2:2> four(1,1,1) # The red component of the first pixel of four.JPG
ans = 159
octave3.2:3> four(1,1,2) # The green component of the first pixel of four.JPG
ans = 125
octave3.2:4> four(1,1,3) # The blue component of the first pixel of four.JPG
ans = 64
octave3.2:5> grayval = 0.2989 * 159 + 0.5870 * 125 + 0.1140 * 64
grayval = 128.20
```

So, the grayscale equivalent of the first pixel of four.JPG should have the value floor(128.20) = 128. Sure enough, when I encoded the procedure in Java, the first pixel of the grayscaled four.JPG had the value 127—close enough, taking into account the possible differences in how Java and Octave handle floating-point computation.

But wait, there’s more…

```matlab
octave3.2:6> fourgray = rgb2gray(four);
octave3.2:7> fourgray(1,1)
ans = 116
```

The value of the first pixel of four.JPG after rgb2gray is 116! Now that’s something no amount of discrepancy in floating-point handling can account for. Besides, hadn’t Octave itself computed a value close to Java’s 127 when done manually?

That’s when I realized that Octave may not be an exact port/clone of Matlab after all. I decided to Google “rgb2gray octave” and, sure enough, the documentation of Octave at SourceForge points to a departure from Matlab’s implementation:

Function File: gray = rgb2gray (rgb)

If the input is an RGB image, the conversion to a gray image is computed as the mean value of the color channels.

And verifying the docs…

```matlab
octave3.2:8> floor((159 + 125 + 64)/3)
ans = 116
```

Problem solved.
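To make the difference concrete, here is a small Java sketch (the method names are mine) contrasting the two conversions for the pixel above:

```java
public class GrayscaleDemo {
    // Matlab-style rgb2gray: weighted sum reflecting the eye's sensitivity
    // to each channel, truncated to an integer as in my original Java port.
    static int matlabGray(int r, int g, int b) {
        return (int) Math.floor(0.2989 * r + 0.5870 * g + 0.1140 * b);
    }

    // Octave-style rgb2gray: plain mean of the channels
    // (integer division floors the result).
    static int octaveGray(int r, int g, int b) {
        return (r + g + b) / 3;
    }

    public static void main(String[] args) {
        // The first pixel of four.JPG from the post: R=159, G=125, B=64.
        System.out.println(matlabGray(159, 125, 64)); // 128
        System.out.println(octaveGray(159, 125, 64)); // 116
    }
}
```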

I’m pretty sure that this isn’t the only difference between Matlab and Octave. The next time I encounter another one, I’ll try to document it here, time permitting.

BONUS: My encounters with Octave so far give credence to this, but I have yet to verify it thoroughly. It seems that Matlab/Octave loops through matrices (arrays of at least two dimensions, in Java/C-speak) in column-major order. This isn’t exactly difficult to do in Java/C, but Java/C programmers are more used to traversing multidimensional arrays in row-major order, since that should result in fewer cache misses and page faults and therefore faster code. Still, for some computations, the order in which you traverse a matrix matters a lot. Be careful there.
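As a quick illustration of the two traversal orders (a sketch with a made-up 2×3 matrix; the method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class TraversalDemo {
    // Visits elements row by row: a[0][0], a[0][1], ... — the cache-friendly
    // order for Java's row-major array-of-arrays layout.
    static List<Integer> rowMajor(int[][] a) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                order.add(a[i][j]);
        return order;
    }

    // Visits elements column by column: a[0][0], a[1][0], ... — the order
    // Matlab/Octave appears to use when looping over a matrix.
    static List<Integer> columnMajor(int[][] a) {
        List<Integer> order = new ArrayList<>();
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                order.add(a[i][j]);
        return order;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2, 3}, {4, 5, 6}};
        System.out.println(rowMajor(a));    // [1, 2, 3, 4, 5, 6]
        System.out.println(columnMajor(a)); // [1, 4, 2, 5, 3, 6]
    }
}
```

Both visit every element, but only the row-major version walks each inner array contiguously.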