Unix tools introduced. Today: tmux

With tmux you can share a shell session between different users.

It provides an easy way to work together in a bash shell from different locations.

It is as easy as:

User 1

$ ssh localhost
$ tmux

User 2

$ ssh localhost
$ tmux attach

Now both users are attached to the same terminal session: both see the same output and both can type into the same shell.
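Sessions can also be given a name, which makes it easier to manage several shared sessions (the name ‘pair’ is just an example):

$ tmux new -s pair        # user 1 creates a named session
$ tmux attach -t pair     # user 2 attaches by name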

More useful examples can be found here:

https://wiki.ubuntuusers.de/tmux/

Run Gitlab CI locally

I use this docker-based approach.

0. Create a git repo to test this

mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."

1. Go to your git directory

cd my-git-project

2. Create a .gitlab-ci.yml

Example .gitlab-ci.yml

image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"

3. Create a docker container with your project dir mounted

docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

4. Execute with

docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test

5. Prints

...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...

Note: The runner only works on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not need to be committed to be taken into account.
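A typical edit-and-test loop therefore looks like this (a sketch; the commit message is arbitrary):

git add -A
git commit -m "Test a pipeline change"   # source changes must be committed
# edits to .gitlab-ci.yml itself are picked up even when uncommitted
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test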

Packaging a Command Line Java App for Linux

How do you create a java command line tool that is (1) easy to install, (2) as small as possible, and (3) does not interfere with a previously installed JVM on the host?

Here is my take:

  1.  Create an executable ‘fat jar’
  2.  Create a minimal jvm to run the fat jar
  3.  Define a proper version number
  4.  Package everything together to a .deb package
  5.  Provide the .deb package via an online repository

All snippets were taken from https://github.com/jschnasse/oi

The oi command line app is a very simple conversion tool that transforms structured formats into one another.

Create an executable ‘fat jar’

I use the maven-assembly-plugin for this. Here is the relevant section from my pom.xml.

<plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <finalName>oi</finalName>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
          <archive>
            <manifest>
              <mainClass>org.schnasse.oi.main.Main</mainClass>
            </manifest>
            <manifestEntries>
              <Automatic-Module-Name>org.schnasse.oi</Automatic-Module-Name>
            </manifestEntries>
          </archive>
          <appendAssemblyId>false</appendAssemblyId>
        </configuration>
      </plugin>

The most important configuration entry is the <mainClass> path. It points to a java class that must define a main method.

It is also important to define a fixed <finalName>. We don’t want to create artifacts with version numbers in their names; the versioning is done elsewhere. Our build process should just spit out an executable at a predictable location.

The mvn package command will now create a fat jar under target/oi.jar.

Create a minimal jvm to run the ‘fat jar’

The created jar can be executed with java -jar target/oi.jar. This is already an important milestone, since you can now use the app on your own development pc. To make it a bit handier, put the actual call into a script and copy it to /usr/bin/oi to make it accessible for all users on the development machine. You can also provide the oi.jar at a more global location, e.g. /usr/lib.

This could be the content of /usr/bin/oi

#!/bin/bash
java -jar /usr/lib/oi.jar "$@"

Use "$@" to pass the command line parameters through to the actual java app. The quotes preserve arguments that contain spaces.

More on this will be explained in the ‘Package everything together’ section.

The next step is to make the program executable on other machines. Since the application depends on a java runtime, we have to find a way to either ship java together with our little oi tool or to ask the user to install it in advance.

Both approaches are feasible. I decided to ship java together with my tool for the following reasons: (1) The tool should be as self-contained as possible. (2) The installation of the tool should not interfere with other java based packages. (3) I want to be free to update to new JVM versions at my own speed; therefore I want to support only a single JVM version at every stage of development.

Today’s java distributions come with a tool named jlink. The jlink tool can be used to create minimal JVMs. A call looks like this:

jlink \
    --add-modules java.base,java.naming,java.xml \
    --verbose \
    --strip-debug \
    --compress=1 \
    --no-header-files \
    --no-man-pages \
    --output /opt/jvm_for_oi

The result is a minimal jvm containing only the modules java.base, java.naming, and java.xml under /opt/jvm_for_oi. The idea is now to ship this jvm together with our app. But to become a bit more independent from the configuration of my development machine, I want to guarantee that my tool always ships with a well defined jvm version and not just with whatever version happens to be installed on my development machine. To create a well defined build environment I use docker. With docker I can create a minimal jvm on the basis of a predefined openJDK version. Here is how it works.

1. Based on the code above we create a file named Dockerfile.build that builds the jvm from openJDK 12.0.1_12.

FROM adoptopenjdk/openjdk12:jdk-12.0.1_12
RUN jlink \
    --add-modules java.base,java.naming,java.xml \
    --verbose \
    --strip-debug \
    --compress 2 \
    --no-header-files \
    --no-man-pages \
    --output /opt/jvm_for_oi

We will use this docker definition just to create the jvm and copy it to our development environment. The docker image can be deleted directly afterwards.

docker build -t adopt_jdk_image -f Dockerfile.build .
docker create --name adopt_jdk_container adopt_jdk_image
docker cp adopt_jdk_container:/opt/jvm_for_oi /usr/share/jvm_for_oi
docker rm adopt_jdk_container

The resulting jvm can be found under /usr/share/jvm_for_oi.
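A quick sanity check that the copied jvm actually runs:

/usr/share/jvm_for_oi/bin/java --version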

This again is a very important milestone. You can now edit your start script at /usr/bin/oi and use the generated jvm instead of your preinstalled java version. This makes the execution of the app independent of the globally installed java version and therefore more reliable.

/usr/share/jvm_for_oi/bin/java -jar /usr/lib/oi.jar "$@"

In my project configuration, including the minimal jvm increases the size of the .deb package by ~10MB. On the target system the jvm takes ~45MB of extra space. In my former setup I declared openJDK-11 as a dependency of the Debian package, which consumes roughly ~80MB of extra space if newly installed.

Define a proper version number

Since oi is a java app built with maven, I use the typical semantic versioning scheme, which consists of three numbers: (1) a major, (2) a minor, and (3) a patch number, separated by dots. For example, a version of ‘0.1.4’ reads as follows:

0 – No major version. There is no stable version yet. Development is still at an early stage.

1 – First minor version. This is software at a very early stage. Usually minor versions are compatible with the most recent major release. Since no major version exists, this software has no reliable behavior yet.

4 – There were four patches released for the first minor version. A patch is typically a bug fix that does not change the documented behavior of the software.

The process of creating a new version works as follows. (1) Define the next version in a variable oi_version stored in a file VERSIONS. (2) Use a script bumpVersions.sh to update the version numbers in several files like the README, the manpage, etc. (3) Commit the files that were updated with the new version number to git. (4) Use the gitflow-maven-plugin to create new versions of the actual source and to push everything in a well defined manner to github.
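A minimal sketch of such a bump script, assuming the VERSIONS file defines oi_version and the old version is passed as an argument (the file list is illustrative):

#!/bin/bash
# sketch of bumpVersions.sh: swap the old version string for the new one
source VERSIONS                  # defines oi_version, e.g. oi_version=0.1.4
old_version=$1                   # previous version, passed as first argument
for f in README.md man/oi/man.adoc; do
  sed -i "s/$old_version/$oi_version/g" "$f"
done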

<plugin>
  <groupId>com.amashchenko.maven.plugin</groupId>
  <artifactId>gitflow-maven-plugin</artifactId>
  <version>1.7.0</version>
  <configuration>
    <gitFlowConfig>
        <developmentBranch>master</developmentBranch>
    </gitFlowConfig>
  </configuration>
</plugin>

The gitflow-maven-plugin supports the command mvn gitflow:release. The command does the following:

1. Define a new release number

2. Update the pom.xml in the development branch accordingly

3. Push the updated pom.xml to the mainline branch

4. Create a tag on mainline

5. Update the release number in the development branch to a new SNAPSHOT release.

6. Push the updated pom.xml to the development branch.

The plugin was originally created for the `gitflow` branching approach. Since my project uses the github-flow branching approach, which does not foresee a development branch besides the mainline, I defined master as the development branch.

Package everything together

At this point a new release of the source code is online at github. Now it’s time to create the binary release. The binary release will be a .deb file containing the newly packaged fat-jar together with the minimal jvm. (5) A build.sh script is used to create the .deb artifact.

#! /bin/bash

scriptdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $scriptdir
source VERSIONS
mvnparam=$1

function build_oi(){
 package_name=$1
 package_version=$2
 package=${package_name}_$package_version
 mkdir -p deb/$package/usr/lib
 mkdir -p deb/$package/usr/bin
 mkdir -p deb/$package/usr/share/man/man1/
 mvn package -D$mvnparam
 sudo cp src/main/resources/$package_name deb/$package/usr/bin
 sudo cp target/$package_name.jar deb/$package/usr/lib

 docker build -t adopt_jdk_image -f Dockerfile.build .
 docker create --name adopt_jdk_container adopt_jdk_image
 docker cp adopt_jdk_container:/opt/jvm_for_oi deb/$package/usr/share/jvm_for_oi
 docker rm adopt_jdk_container

 ln -s ../share/jvm_for_oi/bin/java deb/$package/usr/bin/jvm_for_oi

}

function build(){
 package_name=$1
 package_version=$2
 package=${package_name}_$package_version

 if [ -d $scriptdir/man/$package_name ]
 then
   cd $scriptdir/man/$package_name
   asciidoctor -b manpage man.adoc
   cd -
   sudo cp $scriptdir/man/$package_name/$package_name.1 deb/$package/usr/share/man/man1/
 fi  
 dpkg-deb --build deb/$package
}

build_oi oi $oi_version
build oi $oi_version

As you can see from the listing, the script creates a directory structure in accordance with the .deb package format. It also generates (1) the fat-jar, (2) the minimal jvm, and (3) a man page, and (4) binds it all together with a dpkg-deb --build command.

Provide the .deb package via an online repository

(6) The .deb artifact is then uploaded to a bintray repo, again using a shell script push_to_bintray.sh.

#! /bin/bash

scriptdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source VERSIONS

function push_to_bintray(){
cd $scriptdir
PACKAGE=$1
VERSION=$2
API_AUTH=$3
subject=jschnasse
repo=debian
filepath=${PACKAGE}_${VERSION}.deb
curl -u$API_AUTH -XPOST "https://bintray.com/api/v1/packages/$subject/$repo/" -d@bintray/${PACKAGE}/package.json -H"content-type:application/json"
curl -u$API_AUTH -XPOST "https://bintray.com/api/v1/packages/$subject/$repo/$PACKAGE/versions" -d@bintray/${PACKAGE}/version.json -H"content-type:application/json"
curl -u$API_AUTH -T deb/$filepath "https://bintray.com/api/v1/content/$subject/$repo/$PACKAGE/$VERSION/$filepath;deb_distribution=buster;deb_component=main;deb_architecture=all;publish=1;override=1;"
curl -u$API_AUTH -XPUT "https://bintray.com/api/ui/artifact/$subject/$repo/$filepath" -d'{"list_in_downloads":true}' -H"content-type:application/json"
cd -
}
apiauth=$1
push_to_bintray oi $oi_version $apiauth
push_to_bintray lscsv $lscsv_version $apiauth
push_to_bintray libprocname $libprocname_version $apiauth

The script makes use of a set of prepared json files to provide metadata for the package.

(7) The last step is to visit the github webpage and navigate to the tag that was created in step (4). By adding a release name it becomes visible as a release on the landing page of the git repo.

Step 6 seems to be the most critical step, since it updates the debian repo and makes the new version available to everyone. Between step 5 and step 6 some sort of testing should happen to ensure that the artifact is installable and executes as expected. My plan is to utilize a set of Dockerfiles to test releases. A first attempt can be found here.
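A minimal sketch of such a test, assuming the built package sits in ./deb and that oi offers a --help flag: install the fresh .deb into a clean container and run the tool once.

docker run --rm -v "$PWD/deb:/deb" debian:buster \
  bash -c "apt-get update && apt-get install -y /deb/oi_0.1.4.deb && oi --help"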

Fazit

The process of versioning consists of multiple steps. Most of the work can be automated, and a semi-automated process can be developed with little effort. To automate the whole process it is crucial to provide well-thought-out tests between the steps and to define fallback points. This adds some extra safety but also introduces extra complexity. For future jdk versions it could be beneficial to use jpackage instead of jlink.

Docker Security

There are four major areas to consider when reviewing Docker security:

  • the intrinsic security of the kernel and its support for namespaces and cgroups;

  • the attack surface of the Docker daemon itself;

  • loopholes in the container configuration profile, either by default, or when customized by users;

  • the “hardening” security features of the kernel and how they interact with containers.

https://docs.docker.com/engine/security/security/

Bash hacks. Structured directory listings.

$ lscsv -l /etc/profile |oi -t yaml -i csv --header "type,perm,hlinks,user,group,size,modified,name"
---
data:
- group: "root"
  hlinks: "1"
  modified: "Sep 16  2019"
  name: "/etc/profile"
  perm: "rw-r--r--"
  size: "902"
  type: "-"
  user: "root"

The example uses `lscsv` (a gist) and `oi` (a java tool). `oi` can be installed with:

wget https://schnasse.org/deb/oi_0.0.1.deb
sudo apt install ./oi_0.0.1.deb

Unix tools introduced. Today: cat

cat is a well known command to concatenate the content of multiple files. Example: cat file1 file2 file3

But there are other use cases. cat offers a nice way to print out multi-line strings. It is even possible to include variables in the string, which feels a little bit like using a templating language.

Example:

NAME=ADMIN@COMPANY.COM;
cat <<EOF
Hello $LOGNAME,
please be aware. This system will be under maintenance soon.
Have a good day.
Sincerely
$NAME
EOF
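
If the templating is not wanted, quote the delimiter; the shell then leaves variables unexpanded:

cat <<'EOF'
Nothing is expanded here: $LOGNAME stays literal.
EOF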

For more info on the <<EOF visit this SO-Thread

Command Line Tools: The growth of options

I found this table here: https://danluu.com/cli-complexity/

command    1979  1996  2015  2017
ls           11    42    58    58
rm            3     7    11    12
mkdir         0     4     6     7
mv            0     9    13    14
cp            0    18    30    32
cat           1    12    12    12
pwd           0     2     4     4
chmod         0     6     9     9
echo          1     4     5     5
man           5    16    39    40
which         –     0     1     1
sudo          –     0    23    25
tar          12    53   134   139
touch         1     9    11    11
clear         –     0     0     0
find         14    57    82    82
ln            0    11    15    16
ps            4    22    85    85
ping          –    12    12    29
kill          1     3     3     3
ifconfig      –    16    25    25
chown         0     6    15    15
grep         11    22    45    45
tail          1     7    12    13
df            0    10    17    18
top           –     6    12    14

(Each cell is the number of options the command accepted in that year; – = no 1979 entry in the original table.)

Unix tools introduced. Today: rsync

rsync is a very cool tool that can be used to copy files between hosts or between directories on the same host. As the term ‘sync’ suggests, the copy process can be controlled in great detail. Take a look at the available options under: https://linux.die.net/man/1/rsync

This is my list of cool options. I start with the most basic usage. The following command can be used to copy, and later on sync, two directories.

rsync -avn /source/dir /target/dir  

The command ‘archives’ file attributes (-a) and displays some status info (-v).

In the given form, the command only does a dry-run (-n). To execute the command, remove the -n. Note that a trailing slash on the source matters: /source/dir copies the directory itself into /target/dir, while /source/dir/ copies only its contents.

The command uses the short form of --archive (-a) which translates to (-rlptgoD).

  • -r – recursive copy
  • -l – copy symlinks as symlinks
  • -p – set target permissions to be the same as the source
  • -t – set target mtime to be the same as the source. Use this to support fast incremental updates based on mtime.
  • -g – set target group to be the same as the source
  • -o – set target owner to be the same as the source
  • -D – if the remote user is the superuser, this recreates devices and other special files.
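
The same options work across hosts; modern rsync then tunnels over ssh by default (host and paths below are placeholders):

rsync -avn /source/dir/ user@remote:/target/dir/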

More cool options

Move

--remove-source-files This will remove copied files from source.

Update

--update This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file.

Delete

--delete Delete files on target that do not exist in source tree.

Backup

--backup Make a backup of modified or removed files on target.

--backup-dir=$(date +%Y.%m.%d) Specify a backup dir on target.
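
Combined, a sketch of a mirror that keeps dated copies of everything it would change or delete (paths are placeholders; a relative backup dir is created on the target):

rsync -av --delete --backup --backup-dir=backup_$(date +%Y.%m.%d) /source/dir/ /target/dir/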

What to copy?

--min-size=1 Do not copy empty files. This can be particularly interesting if you have corrupted files in the source.

--max-size=100K Copy only small files. Can be used to handle small and large files differently.

--existing Only overwrite files that already exist on the target; do not create new files on the target.

--ignore-existing Only copy files that do not exist on target.

--exclude-from Define excludes in a file.

Scheduling, Bandwidth and Performance

--time-limit Ends rsync after a certain time limit.

--stop-at=y-m-dTh:m Ends rsync at a specific time.

--partial Allows partial copies in case of interruptions.

--bwlimit=100 Limits bandwidth; specify KBytes/second. A good option if the transfer of large files is required.

Output

  • -h output numbers in a human-readable format.
  • --progress display progress.
  • -i log change info.
  • --log-file= define a log file.
  • --quiet no output.
  • -v output status info. Add more ‘v’s for more verbosity.
  • Forgot to log any progress info? Use the following command to see which files rsync currently has open:
     ls -l /proc/$(pidof rsync)/fd/*