SparkJava: Getting Started

A clearer tutorial

Preliminaries: you need Java 1.8 and Maven

To get started with SparkJava, you should have, at a minimum, the following in your computing environment:

  1. Java 1.8 (check with java -version)
  2. Maven (check with mvn -version)

Here’s an example of sane output for a suitable computing environment (this happens to be from September 2016):

-bash-4.3$ java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
-bash-4.3$ mvn -version
Apache Maven 3.2.5 (NON-CANONICAL_2015-04-01T06:42:27_mockbuild; 2015-03-31T23:42:27-07:00)
Maven home: /usr/share/maven
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.14-200.fc22.x86_64", arch: "amd64", family: "unix"

Here’s output where Java works, but Maven doesn’t:

Phills-MacBook-Pro:~ pconrad$ java -version
java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
Phills-MacBook-Pro:~ pconrad$ mvn -version
-bash: mvn: command not found
Phills-MacBook-Pro:~ pconrad$ 

Since Maven is pure Java software, installing it is as simple as downloading the distribution and then making sure the appropriate bin directory is in your path. We’ll defer the details of that process for now, and leave it as an exercise for the student to figure out—or just use a platform where it is already installed (such as CSIL).
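For the record, the manual install amounts to something like the following. The unpack location below is an assumption—adjust it to wherever you actually extracted your Maven download:

```shell
# Hypothetical unpack location for the Maven distribution -- adjust to taste.
M2_HOME="$HOME/apache-maven-3.2.5"

# Put Maven's bin directory at the front of the PATH.
export PATH="$M2_HOME/bin:$PATH"

# Show the first PATH entry; it should now be Maven's bin directory.
# (After a real install, mvn -version should work from here on.)
echo "$PATH" | cut -d: -f1
```

To make the change permanent, the same two lines would go in your shell startup file (e.g. .bashrc).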

The short version

  1. Clone the code from the accompanying GitHub repo.
  2. Read the repo’s README and do what it says to do.
  3. Enjoy a SparkJava HelloWorld.

If you just want to get something working quickly with SparkJava, this short version is the way to go. You won’t understand as much of the Maven Magic that’s making everything work, and if it breaks you won’t know how to fix it. But you’ll be up and running quickly.

If instead, you need to understand, then the long version takes you through building everything up from scratch.

The long version

This version indicates how the code in that other github repo came to be, a little bit at a time.

We start with Maven

Like it or not, you can’t really do SparkJava without using Maven, or one of the alternatives to Maven such as Gradle, Ivy, or SBT. Plain old Ant isn’t going to cut it. Sorry. So let’s just dig in.

Maven is a build manager similar to the make utility used with C/C++, or the ant utility commonly used with Java.

Instead of a makefile or a build.xml file, Maven uses a file called pom.xml which stands for Project Object Model.

But, Maven requires a bit of getting used to. You’ll probably want to read through this Maven in 5 minutes tutorial, and work through it at least once on a separate “Hello World” type of project that does NOT involve SparkJava, just to get used to it, before you try Maven on SparkJava.

Once that’s done, you can proceed with the steps below.

A minimal pom.xml for a SparkJava project

As explained here, a minimal pom.xml file for a SparkJava hello world project might start out looking like this (the project coordinates are examples—substitute your own):

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

        <!-- model version - always 4.0.0 for Maven 2.x POMs -->
        <modelVersion>4.0.0</modelVersion>

        <!-- project coordinates - values which uniquely identify this project -->
        <groupId>edu.ucsb.cs56.f16.helloSpark.jgaucho</groupId>
        <artifactId>HelloSparkJava</artifactId>
        <version>1.0-SNAPSHOT</version>

        <!-- library dependencies -->
        <dependencies>
            <dependency>
                <groupId>com.sparkjava</groupId>
                <artifactId>spark-core</artifactId>
                <version>2.5</version>
            </dependency>
        </dependencies>
    </project>

But is it enough to have this pom.xml? Of course not. You also need at least one Java source file, e.g. this one:

import static spark.Spark.*;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}
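The (req, res) -> "Hello World" argument is a Java 8 lambda: conceptually, Spark’s get call just records a two-argument function to run whenever a request matches the path. Here is a toy, dependency-free sketch of that idea—all of the names below are ours, not Spark’s real API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Toy model of route registration: get() records a two-argument handler
// for a path, the same shape as Spark's (req, res) -> body lambdas.
public class RouteSketch {
    static Map<String, BiFunction<String, String, String>> routes = new HashMap<>();

    static void get(String path, BiFunction<String, String, String> handler) {
        routes.put(path, handler);
    }

    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
        // Simulate a request arriving at /hello:
        System.out.println(routes.get("/hello").apply("fakeRequest", "fakeResponse"));
    }
}
```

Running this prints Hello World, just as the real webapp will return it for GET /hello.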

And that means we have to worry about where to put that source file in a directory structure. Maven doesn’t want us to just have HelloWorld.java and pom.xml as siblings in a simple flat directory. It wants a specific directory structure. And the best way to get that directory structure is to just let Maven build it for us.

If you run the command:

mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

You get:

|-- pom.xml
`-- src
    |-- main
    |   `-- java
    |       `-- com
    |           `-- mycompany
    |               `-- app
    |                   `-- App.java
    `-- test
        `-- java
            `-- com
                `-- mycompany
                    `-- app
                        `-- AppTest.java

So, let’s try this with a command more appropriate to our local naming conventions. NOTE: substitute your own id in place of jgaucho, e.g. jsmith, kchen, etc.

mvn archetype:generate -DgroupId=edu.ucsb.cs56.f16.helloSpark.jgaucho \ 
         -DartifactId=HelloSparkJava \
         -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
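Incidentally, Maven derives the source directory layout from the groupId: each dot becomes a directory separator under src/main/java (and src/test/java). A quick sketch of that mapping, using the example groupId from the command above:

```shell
# The groupId from the archetype:generate command; dots map to directories.
groupId="edu.ucsb.cs56.f16.helloSpark.jgaucho"

# prints src/main/java/edu/ucsb/cs56/f16/helloSpark/jgaucho
echo "src/main/java/$(echo "$groupId" | tr '.' '/')"
```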

Here is the directory tree we get:

-bash-4.3$ tree
└── HelloSparkJava
    ├── pom.xml
    └── src
        ├── main
        │   └── java
        │       └── edu
        │           └── ucsb
        │               └── cs56
        │                   └── f16
        │                       └── helloSpark
        │                           └── pconrad
        │                               └── App.java
        └── test
            └── java
                └── edu
                    └── ucsb
                        └── cs56
                            └── f16
                                └── helloSpark
                                    └── pconrad
                                        └── AppTest.java

18 directories, 3 files

The pom.xml that results is this one:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>edu.ucsb.cs56.f16.helloSpark.pconrad</groupId>
  <artifactId>HelloSparkJava</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>HelloSparkJava</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

To it, we add the important part, namely the spark-core dependency, as a sibling of the junit dependency already there, so that now the dependencies part looks like this:

  <dependencies>
    <dependency>
      <groupId>com.sparkjava</groupId>
      <artifactId>spark-core</artifactId>
      <version>2.5</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
We also find that, way down in the directory src/main/java/edu/ucsb/cs56/f16/helloSpark/pconrad, there is a file App.java with these contents:

package edu.ucsb.cs56.f16.helloSpark.pconrad;

/**
 * Hello world!
 */
public class App 
{
    public static void main( String[] args )
    {
        System.out.println( "Hello World!" );
    }
}

We are going to replace that with a main program that actually runs a SparkJava webapp. Note that other than the package name edu.ucsb.cs56.f16.helloSpark.pconrad and the name of the class (App instead of HelloWorld), this is identical to the example SparkJava webapp in the tutorial.

package edu.ucsb.cs56.f16.helloSpark.pconrad;

import static spark.Spark.*;

public class App {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

Now we are ready to do the Maven magic. We type mvn package, which according to the Maven in 5 minutes guide, has the effect of running all of the “phases” of a Maven project up through package: validating the project, compiling the sources, running the unit tests, and packaging the compiled classes into a jar.

But it doesn’t work, because by default Maven tries to produce code that is backwards compatible with Java 1.5. And we get this error:

/cs/faculty/pconrad/sparkjava/HelloSparkJava/src/main/java/edu/ucsb/cs56/f16/helloSpark/pconrad/App.java:[7,34] lambda expressions are not supported in -source 1.5
[ERROR] (use -source 8 or higher to enable lambda expressions)
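For context, the lambda that trips up -source 1.5 is just Java 8 shorthand for an anonymous inner class, which is why a 1.5 source level rejects it. Here is a standalone sketch of the equivalence, using java.util.function.Supplier rather than Spark’s own route type:

```java
import java.util.function.Supplier;

public class LambdaDemo {
    // Java 8 lambda syntax -- this is what -source 1.5 cannot compile.
    static String viaLambda() {
        Supplier<String> s = () -> "Hello World";
        return s.get();
    }

    // The pre-lambda equivalent: an anonymous inner class.
    static String viaAnonymousClass() {
        Supplier<String> s = new Supplier<String>() {
            public String get() { return "Hello World"; }
        };
        return s.get();
    }

    public static void main(String[] args) {
        // Both forms produce the same value -- prints true.
        System.out.println(viaLambda().equals(viaAnonymousClass()));
    }
}
```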

To fix this, we have to add the following bit of pom.xml magic sauce: a <build> section that configures the maven-compiler-plugin to compile for Java 1.8. This can go in between the <url>...</url> element and the <dependencies>...</dependencies> element:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.5.1</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
  </build>

And it appears to work! But what did it do? Let’s try tree again and see if we can figure that out:

-bash-4.3$ tree
├── pom.xml
├── src
│   ├── main
│   │   └── java
│   │       └── edu
│   │           └── ucsb
│   │               └── cs56
│   │                   └── f16
│   │                       └── helloSpark
│   │                           └── pconrad
│   │                               └── App.java
│   └── test
│       └── java
│           └── edu
│               └── ucsb
│                   └── cs56
│                       └── f16
│                           └── helloSpark
│                               └── pconrad
│                                   └── AppTest.java
└── target
    ├── classes
    │   └── edu
    │       └── ucsb
    │           └── cs56
    │               └── f16
    │                   └── helloSpark
    │                       └── pconrad
    │                           └── App.class
    ├── generated-sources
    │   └── annotations
    ├── generated-test-sources
    │   └── test-annotations
    ├── HelloSparkJava-1.0-SNAPSHOT.jar
    ├── maven-archiver
│   └── pom.properties
    ├── maven-status
    │   └── maven-compiler-plugin
    │       ├── compile
    │       │   └── default-compile
    │       │       ├── createdFiles.lst
    │       │       └── inputFiles.lst
    │       └── testCompile
    │           └── default-testCompile
    │               ├── createdFiles.lst
    │               └── inputFiles.lst
    ├── surefire-reports
    │   ├── edu.ucsb.cs56.f16.helloSpark.pconrad.AppTest.txt
    │   └── TEST-edu.ucsb.cs56.f16.helloSpark.pconrad.AppTest.xml
    └── test-classes
        └── edu
            └── ucsb
                └── cs56
                    └── f16
                        └── helloSpark
                            └── pconrad
                                └── AppTest.class

44 directories, 13 files

In that big mess, you’ll see that the compiled classes for App.java and AppTest.java got put into the classes and test-classes directories. Better still, there is a .jar file for our project that we can run.

So we might try running that with this command, but we run into a problem:

-bash-4.3$ java -jar target/HelloSparkJava-1.0-SNAPSHOT.jar
no main manifest attribute, in target/HelloSparkJava-1.0-SNAPSHOT.jar

The jar has no main class. Sad. So, what do we do?

Later on, we’ll come back and fix the problem of no main manifest attribute, because honestly, there should be one. But for now, we know that the main class we want is:

edu.ucsb.cs56.f16.helloSpark.pconrad.App
Of course. You knew that, right?

So, we can use this command:

java -cp target/HelloSparkJava-1.0-SNAPSHOT.jar edu.ucsb.cs56.f16.helloSpark.pconrad.App

But that gives us a different error. It turns out that Maven didn’t package up all the jars on which the project depends along with the classes in our project. So we get an error that the class spark/Request is not known:

Exception in thread "main" java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: spark/Request
	at edu.ucsb.cs56.f16.helloSpark.pconrad.App.main(

There is a recipe to tell Maven to fix both problems at once: i.e. to create a so-called uber jar (that’s really what they call it) that has all the dependencies baked in, and to specify which main to run so that the jar is executable, all in one fell swoop.

Regrettably, it looks heinous. It takes the form of another “plugin” that we put into our pom.xml, right next to the other plugin. Here’s what it looks like. Note that the edu.ucsb.cs56.f16.helloSpark.pconrad.App part in the middle is where we specify what the main class should be.

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
              <mainClass>edu.ucsb.cs56.f16.helloSpark.pconrad.App</mainClass>
            </transformer>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>

But, with that in place, now all is well. Here’s what that looks like:

-bash-4.3$ java -jar target/HelloSparkJava-1.0-SNAPSHOT.jar 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

At this point, the webapp is running on port 4567 (SparkJava’s default) of whatever machine we ran it on. If we visit the /hello route on that port, e.g. http://localhost:4567/hello, we’ll see our Hello World response.

Getting it running on Heroku

To get this to run on Heroku, we only need to do these things:

  1. Add a one line text file called Procfile that has this line in it:
    web:    java -jar target/HelloSparkJava-1.0-SNAPSHOT.jar 
  2. Add code to set the port number from an environment variable, like this:

    • A minimalist one-liner approach. This may crash if PORT is not defined:
        Spark.port(Integer.valueOf(System.getenv("PORT"))); // needed for Heroku
    • A more robust approach:
        try {
            Spark.port(Integer.valueOf(System.getenv("PORT"))); // needed for Heroku
        } catch (Exception e) {
            System.err.println("NOTICE: using default port.  Define PORT env variable to override");
        }
  3. Using the heroku toolbelt, type heroku create, and then git push heroku master.

Heroku should then build the uberjar according to the pom.xml, and execute the Procfile to start the webapp running on the port Heroku specifies.
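The PORT-handling logic from step 2 can be factored into a small, testable helper. The method name readPort and the helper class are ours, not part of Spark’s API; 4567 is SparkJava’s documented default port:

```java
public class PortConfig {
    static final int DEFAULT_PORT = 4567; // SparkJava's default port

    // Parse the PORT environment value, falling back to the default when it
    // is missing or malformed (Heroku sets PORT; a local run may not).
    static int readPort(String portEnv) {
        try {
            return Integer.valueOf(portEnv);
        } catch (Exception e) {
            System.err.println("NOTICE: using default port. Define PORT env variable to override");
            return DEFAULT_PORT;
        }
    }

    public static void main(String[] args) {
        // In App.main we would then call Spark.port(readPort(System.getenv("PORT"))).
        System.out.println(readPort("8080")); // prints 8080
        System.out.println(readPort(null));   // prints 4567 (with a notice on stderr)
    }
}
```

This keeps App.main to a single Spark.port(...) call and makes the fallback behavior easy to unit test.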

More on SparkJava: Getting Started