Maximizing Developer Effectiveness

After commitment, measurement and empowerment comes scaling.

https://martinfowler.com/articles/developer-effectiveness.html

Working environment

“When we look into these scenarios, a primary reason for the problems is that the engineering organization has neglected to provide developers with an effective working environment. While transforming, they have introduced too many new processes, too many new tools and new technologies, which has led to increased complexity and added friction in their everyday tasks.”

Micro Feedback Loops

“I recommend focusing on optimizing these loops, making them fast and simple. Measure the length of the feedback loop, the constraints, and the resulting outcome. When new tools and techniques are introduced, these metrics can clearly show the degree to which developer effectiveness is improved or at least isn’t worse.”

“The key loops I have identified are: […]”

“It is hard to explain to management why we have to focus on such small problems. Why do we have to invest time to optimize a compile stage with a two minute runtime to instead take only 15 seconds? This might be a lot of work, perhaps requiring a system to be decoupled into independent components. It is much easier to understand optimizing something that is taking two days as something worth taking on.”
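A low-tech way to make such a loop visible is simply to time it and track the number; a minimal sketch (the Gradle command is just a placeholder, substitute whatever your compile or test loop actually runs):

# Measure one iteration of the local feedback loop.
# Replace "./gradlew test" with your own compile/test command.
time ./gradlew test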

Refinement as Consideration in Code Reviews

“When people think of code reviews, they usually think in terms of an explicit step in a development team’s workflow. These days the Pre-Integration Review, carried out on a Pull Request is the most common mechanism for a code review, to the point that many people witlessly consider that not using pull requests removes all opportunities for doing code review. Such a narrow view of code reviews doesn’t just ignore a host of explicit mechanisms for review, it more importantly neglects probably the most powerful code review technique – that of perpetual refinement done by the entire team.”

https://martinfowler.com/bliki/RefinementCodeReview.html

https://martinfowler.com/bliki/PullRequest.html

TerminusDB

https://terminusdb.com/

TerminusDB is an open-source knowledge graph database that provides reliable, private & efficient revision control & collaboration. If you want to collaborate with colleagues or build data-intensive applications, nothing will make you more productive.

Principles for decentralized Web

https://blog.dshr.org/2021/02/principles-for-decentralized-web.html

The fundamental goal of the DWeb is to reduce the dominance of the giant centralized platforms, replacing it with large numbers of interoperable smaller services each implementing its own community’s policies. Inevitably, as with cryptocurrencies and social networks such as Parler, Gab, 4chan and 8chan, some of the services will be used for activities generally regarded as malign. These will present an irresistible target for PR attacks intended to destroy the DWeb brand.

Decentralized Web

https://getdweb.net/

DWeb connects the people, projects and protocols essential to building a decentralized web. A web that is more private, reliable, secure and open. A web with many winners—returning to the original vision of the World Wide Web and internet.

Data Modelling with a “JSON to RDF” Approach

Using this Stack Overflow question as an example:

“Supposing we have the following triple in Turtle syntax:

<http://example.com/Paul> <http://example.com/running> <http://example.com/10miles> .

How do I add a start and end time? For example if I want to say he started at 10 am and finished his 10miles run at 12 am. I want to use xsd:dateTime.”

https://stackoverflow.com/questions/49726990
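The core of an answer is to stop modelling the run as a single opaque triple and instead give the run a resource of its own, to which start and end times can be attached. A hand-written Turtle sketch of that target shape (the ex: names are illustrative, not taken from the question):

@prefix ical: <http://www.w3.org/2002/12/cal/ical#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.com/> .

# The run is its own resource, so runner, distance and times
# can all hang off it directly.
ex:run1 ex:runner ex:Paul ;
    ex:distance "10.0"^^xsd:double ;
    ex:unit "mile" ;
    ical:dtstart "2018-04-09T10:00:00"^^xsd:dateTime ;
    ical:dtend "2018-04-09T12:00:00"^^xsd:dateTime .

The steps below arrive at essentially this shape, but start from a plain JSON document.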


Sometimes it can be hard to create good, well-fitting models. In my own experience it is crucial to identify a well-defined set of entities and relations from which to create a vocabulary. Some people prefer visual strategies to develop their models. I prefer to write models in structured text. This has the advantage that the modelling process leads directly into actual coding.

Here is an example of how I would tackle the question.

1. The modelling part (not much RDF involved)

{
    "runs": [
        {
            "id": "runs:0000001",
            "distance": {
                "length": 10.0,
                "unit": "mile"
            },
            "time": {
                "start": "2018-04-09T10:00:00",
                "end": "2018-04-09T12:00:00"
            },
            "runner": {
                "id": "runner:0000002",
                "name": "Paul"
            }
        }
    ]
}

We store the JSON document in a file run.json. From here we can use the ‘oi’ command line tool to create an ad hoc context.

oi run.json -t context

The resulting context is just a stub, but with a few additions we can easily turn it into a context document that defines IDs and types for each term/entity/relation.

2. The RDF part: define a proper context for your document.

{
    "@context": {
        "ical": "http://www.w3.org/2002/12/cal/ical#",
        "xsd": "http://www.w3.org/2001/XMLSchema#",
        "runs": {
            "@id": "info:stack/49726990/runs/",
            "@container": "@list"
        },
        "distance": {
            "@id": "info:stack/49726990/distance"
        },
        "length": {
            "@id": "info:stack/49726990/length",
            "@type": "xsd:double"
        },
        "unit": {
            "@id": "info:stack/49726990/unit"
        },
        "runner": {
            "@id": "info:stack/49726990/runner/"
        },
        "name": {
            "@id": "info:stack/49726990/name"
        },
        "time": {
            "@id": "info:stack/49726990/time"
        },
        "start": {
            "@id":"ical:dtstart",
            "@type": "xsd:dateTime"
        },
        "end": {
            "@id":"ical:dtend",
            "@type": "xsd:dateTime"
        },
        "id": "@id"
    }
}

3. The fun part: throw it at an RDF converter of your choice

This is how it looks in the JSON-LD Playground.
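Since the Playground expects a single document, the context from step 2 can be embedded directly into the data from step 1; the following is just the two pieces from above merged into one JSON-LD document:

{
    "@context": {
        "ical": "http://www.w3.org/2002/12/cal/ical#",
        "xsd": "http://www.w3.org/2001/XMLSchema#",
        "runs": { "@id": "info:stack/49726990/runs/", "@container": "@list" },
        "distance": { "@id": "info:stack/49726990/distance" },
        "length": { "@id": "info:stack/49726990/length", "@type": "xsd:double" },
        "unit": { "@id": "info:stack/49726990/unit" },
        "runner": { "@id": "info:stack/49726990/runner/" },
        "name": { "@id": "info:stack/49726990/name" },
        "time": { "@id": "info:stack/49726990/time" },
        "start": { "@id": "ical:dtstart", "@type": "xsd:dateTime" },
        "end": { "@id": "ical:dtend", "@type": "xsd:dateTime" },
        "id": "@id"
    },
    "runs": [
        {
            "id": "runs:0000001",
            "distance": { "length": 10.0, "unit": "mile" },
            "time": { "start": "2018-04-09T10:00:00", "end": "2018-04-09T12:00:00" },
            "runner": { "id": "runner:0000002", "name": "Paul" }
        }
    ]
}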

Or simply use ‘oi’:

 
oi run.json -f run.context -t ntriples

Prints:

 
_:b0 <info:stack/49726990/runs/> _:b3 .
_:b3 <http://www.w3.org/1999/02/22-rdf-syntax-ns#first> <info:stack/49726990/runs/0000001> .
_:b3 <http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <http://www.w3.org/1999/02/22-rdf-syntax-ns#nil> .
<info:stack/49726990/runs/0000001> <info:stack/49726990/distance> _:b1 .
<info:stack/49726990/runs/0000001> <info:stack/49726990/runner/> <info:stack/49726990/runner/0000002> .
<info:stack/49726990/runs/0000001> <info:stack/49726990/time> _:b2 .
_:b1 <info:stack/49726990/length> "1.0E1"^^<http://www.w3.org/2001/XMLSchema#double> .
_:b1 <info:stack/49726990/unit> "mile" .
<info:stack/49726990/runner/0000002> <info:stack/49726990/name> "Paul" .
_:b2 <http://www.w3.org/2002/12/cal/ical#dtend> "2018-04-09T12:00:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
_:b2 <http://www.w3.org/2002/12/cal/ical#dtstart> "2018-04-09T10:00:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .

Run GitLab CI locally

I use the following Docker-based approach.

0. Create a git repo to test this

mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."

1. Go to your git directory

cd my-git-project

2. Create a .gitlab-ci.yml

Example .gitlab-ci.yml

image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"

3. Create a docker container with your project dir mounted

docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

4. Execute with

docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test

5. Prints

...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...

Note: The runner only works on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not need to be committed to be taken into account.
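In practice that means committing (even a throwaway work-in-progress commit) before invoking the runner, for example:

# gitlab-runner exec only sees committed files, so commit local changes first.
git add -A
git commit -m "WIP: test pipeline locally"
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test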

RDF to pretty JSON with oi 0.5.8

Given the following Turtle input (books.ttl):

_:b0 a <http://schema.org/Book> ;
    <http://schema.org/name> "Semantic Web Primer (First Edition)" ;
    <http://schema.org/offers> _:b1 ;
    <http://schema.org/publisher> "Linked Data Tools" .

_:b1 a <http://schema.org/Offer> ;
    <http://schema.org/price> "2.95" ;
    <http://schema.org/priceCurrency> "USD" .

Based on this Stack Overflow answer I created a tool named ‘oi’ that provides some capabilities to convert RDF to JSON on the command line. If no frame is provided via the CLI, the tool aims to generate @context entries that work for most situations.


oi -i turtle -t json books.ttl | jq '.["@graph"][0]'

prints


{
  "@id" : "_:b0",
  "@type" : "http://schema.org/Book",
  "name" : "Semantic Web Primer (First Edition)",
  "offers" : {
    "@id" : "_:b1",
    "@type" : "http://schema.org/Offer",
    "price" : "2.95",
    "priceCurrency" : "USD"
  },
  "publisher" : "Linked Data Tools"
}

The tool attempts to support various output formats. The result is not meant to be 100% correct in each and every case. The overall idea is to provide ad hoc conversions as one step in a conversion pipeline.
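For example, combined with jq it can pull a single field out of the converted JSON (reusing the books.ttl from above):

oi -i turtle -t json books.ttl | jq -r '.["@graph"][0].name'

which prints "Semantic Web Primer (First Edition)".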

The tool is available as a .deb package via its GitHub page: https://github.com/jschnasse/oi.

Fundamental Rights for the Vaccinated

“Vaccinated people would not be better off, […]. They would be treated normally. Fundamental rights are not a privilege. […] Treating what is unequal unequally is a cornerstone of the rule of law.”

Found in taz, 20 January 2021, p. 6, letters section.