Category: Java

  • Let’s Encrypt with Tomcat 7

    Using HTTPS on Tomcat with a Let’s Encrypt certificate is quite easy – as soon as you know how to do it (as usual). acme.sh provides a quite convenient way of getting and renewing certificates. Automating the renewal is especially important as the certificates have a lifetime of just 90 days.

    So get and “install” acme.sh first! And make sure Tomcat is running on port 80. Then start getting your certificate:

    (more…)

  • How to run Tomcat on Port 80

    A standard Tomcat installation starts the web server on port 8080 – which is usually not the desired behavior. To switch the server to port 80, there are two options, which I outline in the following:
    (more…)
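
    As a quick sketch of the most direct of those options (assuming an otherwise default conf/server.xml), the HTTP Connector’s port attribute can be changed from 8080 to 80:

    <!-- conf/server.xml: change the HTTP connector's port from 8080 to 80.  -->
    <!-- Sketch only – note that on Linux, binding to ports below 1024       -->
    <!-- additionally requires root privileges or a port redirect.           -->
    <Connector port="80" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    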

  • What to do in case of org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes

    I’m currently gathering my first experiences with Apache Spark and in particular Spark SQL.

    While I was playing a bit with Spark SQL Joins I suddenly faced an exception like Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes: foo.
    Followed by the parsed SQL statement etc …

    Well, in MySQL the error message would have been
    "Unknown column 'foo' in field list"
    Aka: You are accessing a column/field foo where this field does not exist.
    I was already a bit too close to the problem to see it at once – and I only found descriptions dealing with nested structures etc. (which wasn’t the case in my situation). So it took me a couple of minutes to realize what Spark wanted to tell me.

    Maybe this helps someone else, too.

  • How to ignore Maven build errors due to JavaDoc with Java 8

    Java 8 is a bit stricter in JavaDoc parsing. This can lead to build failures in Maven when building the repo, with warnings like:

    Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.7:jar (attach-javadocs) on project [projectname]: MavenReportException: Error while creating archive:
    Exit code: 1 - [path-to-file]:[linenumber]: warning: no description for @param

    Sure, the good solution would be to fix the JavaDocs. But in cases where you just cloned a foreign repo, you probably just want to get it running, not start fixing it.

    To ignore the errors, just turn off doclint by adding the following <configuration> tag to your pom.xml:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.10.2</version>
        <executions>
            <execution>
                <id>attach-javadocs</id>
                <goals>
                    <goal>jar</goal>
                </goals>
                <configuration> <!-- add this to disable checking -->
                    <additionalparam>-Xdoclint:none</additionalparam>
                </configuration>
            </execution>
        </executions>
    </plugin>
    

    Some more solutions can be found in this StackOverflow thread.
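
    A variant worth noting (hedged – the option names depend on the plugin version): moving the <configuration> up to the plugin level applies it to every goal of the plugin, not just the attach-javadocs execution. In maven-javadoc-plugin 3.0+ the dedicated <doclint> option replaces additionalparam:

    <!-- plugin-level configuration: applies to all javadoc goals -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.10.2</version>
        <configuration>
            <additionalparam>-Xdoclint:none</additionalparam>
            <!-- with maven-javadoc-plugin 3.0+ use instead:
                 <doclint>none</doclint> -->
        </configuration>
    </plugin>
    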

  • How to get List of Objects from deeper level in Json via GSON

    Sometimes you get a quite deeply nested JSON response, but the only thing you need is a list of objects in a certain branch of the JSON document (like a response to Yahoo’s YQL query).

    Assume just the following JSON document:

    {
    "fieldA": {
        "fieldB": {
            "fields": [
                { "foo": "test1", "bar": "test2"},
                { "foo": "test11", "bar": "test22"}
             ]
          }
       }
    }
    

    And the only thing you need is the fields array.
    A Java 8 way to get the fields as a list would be:

    List<FooBar> fooBars = Stream.of(gson.fromJson(json, JsonObject.class)
            .getAsJsonObject("fieldA")
            .getAsJsonObject("fieldB")
            .getAsJsonArray("fields"))
        .flatMap(e -> Stream.of(gson.fromJson(e, FooBar[].class)))
        .collect(Collectors.toList());
    

    But that’s quite some code. It’s okay if you only need it once, but as soon as you need this several times, it clearly violates the DRY principle. Gson (which I am using a lot) doesn’t seem to provide a simple way of doing this – short of creating the whole hierarchy as Java classes, which might just be overkill.

    Solving the problem in a more generic way is the way to go – but it also requires creating generic arrays:

    import java.lang.reflect.Array;
    import java.util.Arrays;
    import java.util.List;
    import com.google.gson.Gson;
    import com.google.gson.JsonArray;
    import com.google.gson.JsonObject;

    class Gsons {
        /** Deserializes the JSON array found at the dot-separated path into a List<T>. */
        public static <T> List<T> asList(String json, String path, Class<T> clazz) {
            Gson gson = new Gson();
            String[] paths = path.split("\\.");
            JsonObject o = gson.fromJson(json, JsonObject.class);
            // walk down the object hierarchy; the last path element names the array
            for (int i = 0; i < paths.length - 1; i++) {
                o = o.getAsJsonObject(paths[i]);
            }
            JsonArray jsonArray = o.getAsJsonArray(paths[paths.length - 1]);
            // build a Class<T[]> token via a zero-length generic array
            @SuppressWarnings("unchecked")
            Class<T[]> clazzArray = (Class<T[]>) Array.newInstance(clazz, 0).getClass();
            T[] objectArray = gson.fromJson(jsonArray, clazzArray);
            return Arrays.asList(objectArray);
        }
    }
    

    The only things to do are creating a class for the entities and calling the method:

    List<FooBar> fooBars = Gsons.asList(json, "fieldA.fieldB.fields", FooBar.class);
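
    The generic-array part is the non-obvious bit, so here it is in isolation – a stdlib-only sketch (the class and method names are made up for this example):

    ```java
    import java.lang.reflect.Array;

    public class GenericArrayDemo {
        // You cannot write "new T[0]" or "T[].class" in Java, but
        // java.lang.reflect.Array can build the array class token at runtime.
        @SuppressWarnings("unchecked")
        static <T> Class<T[]> arrayClassOf(Class<T> clazz) {
            return (Class<T[]>) Array.newInstance(clazz, 0).getClass();
        }

        public static void main(String[] args) {
            Class<String[]> stringArray = arrayClassOf(String.class);
            System.out.println(stringArray.getSimpleName());    // String[]
            System.out.println(stringArray.getComponentType()); // class java.lang.String
        }
    }
    ```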
    
  • How to (re)schedule an alarm after an App upgrade in Android

    In one of my Apps I am using alarms to schedule notifications.
    Of course I also want to (re)schedule the alarm when the device is rebooted. Easy: just add a BOOT_COMPLETED action to the intent-filter of the corresponding schedule receiver:

    <receiver android:name=".AlarmScheduleReceiver" android:enabled="true">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </receiver>
    

    The problem is just that when the app is upgraded, your alarm will not be rescheduled! Not too much of a problem – if you know about it! Just add another action to the intent-filter (on API level 12 and above there is also android.intent.action.MY_PACKAGE_REPLACED, which is sent only when your own app is replaced):

    <action android:name="android.intent.action.PACKAGE_REPLACED" />
    

    I was really lucky that a friend pointed that out when I added that feature to my app! Figuring this out just by getting user complaints that “the alarm sometimes doesn’t work” would not have been very funny!

    I would have been pretty glad if the API docs mentioned something like “hey, when you listen for BOOT_COMPLETED, you might consider listening for PACKAGE_REPLACED, too”. Well, that’s life.

  • Java 8 Streams: Collecting items into a Map of (Key, Item)

    Once in a while I come across the task where I have a list of Items that I want to filter and afterwards store in a map. Usually the key is a property of the Item: anItem.name -> anItem

    In the usual Java way this looked like the following:

    Map<String, Item> map = new HashMap<>();
    for (Item s : list) {
        if (!s.name.equals("a")) {
            map.put(s.name, s);
        }
    }
    

    Nothing really special, but it somehow doesn’t look too nice. Yesterday I thought: in Scala I would emit tuples and call .toMap. Isn’t that also possible with Java 8 Streams? And indeed it is:

    Map<String, Item> map = l.stream()
        .filter(s -> !s.name.equals("a"))
        .collect(toMap(s -> s.name, s -> s)); // toMap is a static import of Collectors.toMap(...)
    

    This looks compact and readable!

    If you don’t like s -> s, just use the identity() function of the Function class. Actually I do not like static imports very much, as they make the code less readable, but in this case I would opt for them.

    Map<String, Item> map = l.stream()
        .filter(s -> !s.name.equals("a"))
        .collect(toMap(s -> s.name, identity())); // toMap and identity are imported statically
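
    One caveat the snippets above glide over: Collectors.toMap throws an IllegalStateException when two items map to the same key. The three-argument overload takes a merge function for that case – a small self-contained sketch (the word list is made up):

    ```java
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import static java.util.stream.Collectors.toMap;

    public class ToMapMergeDemo {
        public static void main(String[] args) {
            List<String> words = Arrays.asList("apple", "avocado", "banana");
            // "apple" and "avocado" collide on key 'a'; the merge function keeps the first value
            Map<Character, String> byInitial = words.stream()
                .collect(toMap(w -> w.charAt(0), w -> w, (first, second) -> first));
            System.out.println(byInitial.get('a')); // apple
            System.out.println(byInitial.get('b')); // banana
        }
    }
    ```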
    
  • Windows Tomcat start failed command 127.0.0.1 could not be found

    I just installed Tomcat 7 on my Windows machine and tried to fire it up through NetBeans. But instead of a running server, I just got an error message saying that the command 127.0.0.1 could not be found (the localized German error message: “Der Befehl “127.0.0.1” ist entweder falsch geschrieben oder konnte nicht gefunden werden.”).

    I remember reading about it in a Tomcat bug tracker (but can’t find it any more). Well, the solution is pretty simple:
    Just open [tomcat home]\bin\catalina.bat and remove the quote characters (") from lines 196 and 201 (in the code snippet below these are lines 1 and 6):

    set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG%
    
    if not "%LOGGING_MANAGER%" == "" goto noJuliManager
    set LOGGING_MANAGER=-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
    :noJuliManager
    set JAVA_OPTS=%JAVA_OPTS% %LOGGING_MANAGER%
    
    
  • Enable MySQL Streaming in Cascading / Scalding

    Last week I ran into an ugly problem with Scalding:
    I needed to read a really large table from MySQL to process it in a certain job. In general this is trivial: just use a JDBC source, select your columns, and that’s it.

    Usually we do this using 1-3 parallel connections to the SQL server. This time I started running out of memory because Scalding didn’t (more precisely: couldn’t) swap/spill to disk. The problem here is the default behaviour of the mysql-connector. The API docs say:

    By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate, and due to the design of the MySQL network protocol is easier to implement. If you are working with ResultSets that have a large number of rows or large values, and can not allocate heap space in your JVM for the memory required, you can tell the driver to stream the results back one row at a time.

    So, what does this mean? If you query a 10 GB table, you get all the data at once and the connector tries to buffer it in memory – which is a bad idea if you just want to process tuple by tuple. You can split this large query into 10 smaller ones: SELECT ... FROM ... LIMIT 0, x, then SELECT ... FROM ... LIMIT x, x, etc. This works – but partitioning a large result this way is not very efficient, because starting from the second query, MySQL has to iterate over all the skipped rows before it can start gathering and returning results. So you partition the big query into 10 smaller results, but you put quite a lot of load on the server. And you still have to keep a lot of rows in RAM.

    (more…)
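
    For reference, the driver-level fix boils down to creating a forward-only, read-only statement and setting the fetch size to Integer.MIN_VALUE – the documented signal for Connector/J to stream results row by row. A sketch (the JDBC URL, credentials, table and the process() helper are placeholders):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class StreamingSelect {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "secret");
                 Statement st = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                // Integer.MIN_VALUE is Connector/J's magic value for
                // "stream one row at a time instead of buffering everything"
                st.setFetchSize(Integer.MIN_VALUE);
                try (ResultSet rs = st.executeQuery("SELECT id, payload FROM big_table")) {
                    while (rs.next()) {
                        process(rs.getLong(1), rs.getString(2)); // handle one tuple
                    }
                }
            }
        }

        static void process(long id, String payload) {
            // placeholder for per-row processing
        }
    }
    ```

    Note that a streaming result set keeps the connection busy until it is fully read or closed.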

  • Compiling Cascading: FAILURE: Build failed with an exception.

    Today I ran into a really stupid error message when I tried to recompile cascading-jdbc:

    Evaluating root project 'cascading-jdbc' using build file '/home/…/cascading-jdbc/build.gradle'.

    FAILURE: Build failed with an exception.

    * Where:
    Build file '/home/…/cascading-jdbc/build.gradle' line: 68

    * What went wrong:
    A problem occurred evaluating root project 'cascading-jdbc'.
    > Could not find method create() for arguments [fatJarPrepareFiles, class eu.appsatori.gradle.fatjar.tasks.PrepareFiles] on task set.

    * Try:
    Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output.

    BUILD FAILED

    Total time: 5.355 secs

    Solution

    Check your Gradle version! I was on a brand-new Ubuntu install with the shipped Gradle version 1.4. Well, the Cascading README states that Gradle 1.8 is required – and it really is.