Wednesday, April 1, 2020

Back to third grade with SSH, or how to set up ~/.ssh/authorized_keys

Very often I am asked to SSH to a machine, only to hit "access denied". A few round trips are then needed until the issue is resolved. Here are the commands I use to get it working the first time.

# mkdir .ssh
# vi .ssh/authorized_keys # add user's public key here
# chown -R user:user .ssh
# chmod 700 .ssh
# chmod 600 .ssh/authorized_keys
# restorecon -R .ssh

The last command is needed on SELinux-enabled distributions.

Hope you find it useful.

Thursday, November 21, 2019

Using authenticated proxy with Selenium / Packaging Chrome extensions with Ruby

Overview

Recently I got a request to implement authenticated proxy support for our product test framework. The problem is that recent browsers no longer accept the widely popular http://username:password@proxy.example.com syntax and still ask you to enter credentials manually.

The next problem is that Selenium does not let you interact with these basic auth dialogs [1][2]. So how should one go about this?

Chrome allows you to supply the credentials with a custom extension that you can load via Selenium/Watir.


One additional complication is that we may use a different proxy server each time, so the extension needs to be packaged on the fly.

Chrome extension

This is the proxy extension as I use it. Treat it as an example for whatever you're trying to do. It consists of only two files that you can put in an empty directory.

manifest.json

{
    "version": "0.0.1",
    "manifest_version": 2,
    "name": "Authenticated Proxy",
    "permissions": [
        "<all_urls>",
        "proxy",
        "unlimitedStorage",
        "webRequest",
        "webRequestBlocking",
        "storage",
        "tabs"
    ],
    "background": {
        "scripts": ["background.js"]
    },
    "minimum_chrome_version":"23.0.0"
}

background.js.erb

var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "<%= proxy_proto %>",
      host: "<%= proxy_host %>",
      port: parseInt(<%= proxy_port %>)
    },
    bypassList: <%= proxy_bypass.split(/[ ,]/).delete_if(&:empty?).to_json %>
  }
};

chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

function callbackFn(details) {
  return {
    authCredentials: {
      username: "<%= proxy_user %>",
      password: "<%= proxy_pass %>"
    }
  };
}

chrome.webRequest.onAuthRequired.addListener(
  callbackFn,
  {urls: ["<all_urls>"]},
  ['blocking']
);

Protocol Buffers

As described on its web site, Protocol Buffers is a method of serializing structured data. In the CRX3 format (unlike CRX2), it is used for the required header of the extension.

I decided to use the ruby-protobuf project instead of the Google Ruby library because it appeared well maintained and is pure Ruby. I assume the Google Ruby library will work well too.

The Packager

A CRX v3 file consists of the following (a small header-reading sketch follows the list):
  • Cr24 - ASCII 8bit magic string
  • 3 - protocol version in unsigned 32bit little endian
  • header length in bytes in unsigned 32bit little endian
  • header itself - the protobuf serialized object
    • crx3.proto - the protobuf descriptor
    • as a rule of thumb
      •  all lengths inside are given as unsigned 32bit little-endian integers
      • key files are inserted in PKCS#8 binary encoding (Ruby's key.to_der worked fine)
  • ZIP archive of the extension files
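
To make the layout concrete, here is a minimal reader sketch (my own addition, not part of the packager) that prints the fixed fields of a packed .crx file; the path is just an example, and the "V" unpack directive corresponds to the unsigned 32bit little-endian fields above.

File.open("/tmp/extension.crx", "rb") do |io|
  magic      = io.read(4)               # "Cr24" magic string
  version    = io.read(4).unpack1("V")  # protocol version, 3 for CRX3
  header_len = io.read(4).unpack1("V")  # header length in bytes
  header     = io.read(header_len)      # protobuf-serialized CrxFileHeader
  zip        = io.read                  # the rest is the ZIP archive
  puts "#{magic} v#{version}: header #{header.bytesize} bytes, zip #{zip.bytesize} bytes"
end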

Generating protobuf stub

We need to install the Google protobuf compiler protoc. Save the protocol file in the directory where you want the stub to live, then generate it with:

protoc --plugin=protoc-gen-ruby-protobuf=`ls ~/bin/protoc-gen-ruby` --ruby-protobuf_out=./ path/chrome_crx3/crx3.proto
This will create a file crx3.pb.rb in the same directory as the protocol file. All you need is to require 'path/crx3.pb.rb' wherever you want to use that format.
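
As a quick sanity check (my own addition, not required for the packager), you can load the generated stub and serialize one of its messages; the class and field come straight from crx3.proto and are the same ones used by the packager below:

require 'path/crx3.pb.rb'

signed_data = Crx_file::SignedData.new
signed_data.crx_id = "0123456789abcdef"  # 16 bytes; the real packager derives this from the public key
puts signed_data.encode.bytesize         # size of the serialized protobuf message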

Actual packager

At this point the packager is straightforward to implement. I'm pasting the whole logic here.

We have one ::zip method that generates a ZIP archive in memory. If an ERB binding is provided by the caller, any .erb files are processed through ERB first. That's how the background.js.erb above works.

The method ::header_v3_extension generates the signature and constructs the whole file header.

Finally, ::pack_extension just glues the two methods above together to generate the final extension file.

chrome_extension.rb

require 'erb'
require 'find'
require 'openssl'
require 'zip'

require_relative 'resource/chrome_crx3/crx3.pb.rb'

class ChromeExtension
  def self.gen_rsa_key(len=2048)
    OpenSSL::PKey::RSA.generate(len)
  end

  #  @note file format spec pointers:
  #    https://groups.google.com/a/chromium.org/d/msgid/chromium-extensions/977b9b99-2bb9-476b-992f-97a3e37bf20c%40chromium.org
  def self.header_v3_extension(data, key: nil)
    key ||= gen_rsa_key()

    digest = OpenSSL::Digest.new('sha256')
    signed_data = Crx_file::SignedData.new
    signed_data.crx_id = digest.digest(key.public_key.to_der)[0...16]
    signed_data = signed_data.encode

    signature_data = String.new(encoding: "ASCII-8BIT")
    signature_data << "CRX3 SignedData\00"
    signature_data << [ signed_data.size ].pack("V")
    signature_data << signed_data
    signature_data << data

    signature = key.sign(digest, signature_data)

    proof = Crx_file::AsymmetricKeyProof.new
    proof.public_key = key.public_key.to_der
    proof.signature = signature

    header_struct = Crx_file::CrxFileHeader.new
    header_struct.sha256_with_rsa = [proof]
    header_struct.signed_header_data = signed_data
    header_struct = header_struct.encode

    header = String.new(encoding: "ASCII-8BIT")
    header << "Cr24"
    header << [ 3 ].pack("V") # version
    header << [ header_struct.size ].pack("V")
    header << header_struct

    return header
  end

  # @param file [String] to write result to
  # @param dir [String] to read extension from
  # @param key [OpenSSL::PKey]
  # @param crxv [String] version of CRX file to create
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return undefined
  def self.pack_extension(file:, dir:, key: nil, crxv: "v3", erb_binding: nil)
    zip = zip(dir: dir, erb_binding: erb_binding)

    File.open(file, 'wb') do |io|
      io.write self.send(:"header_#{crxv}_extension", zip, key: key)
      io.write zip
    end
  end

  # @param dir [String] to read extension from
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return [String] the zip file content
  def self.zip(dir:, erb_binding: nil)
    dir_prefix_len = dir.end_with?("/") ? dir.length : dir.length + 1
    zip = StringIO.new
    zip.set_encoding "ASCII-8BIT"
    Zip::OutputStream::write_buffer(zip) do |zio|
      Find.find(dir) do |file|
        if File.file? file
          if erb_binding && file.end_with?(".erb")
            zio.put_next_entry(file[dir_prefix_len...-4])
            erb = ERB.new(File.read file)
            erb.location = file
            content = erb.result(erb_binding)
            zio.write(content)
            Kernel.puts content
          else
            zio.put_next_entry(file[dir_prefix_len..-1])
            zio.write(File.read(file))
          end
        end
      end
    end
    return zip.string
  end
end

Using the packager

Packing the extension is as simple as:
require 'chrome_extension'

ChromeExtension.pack_extension(file: "/path/of/target/extension.crx", dir: "/path/of/proxy/extension")
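
When the extension directory contains .erb templates like background.js.erb above, you also pass a binding that defines the variables the templates reference. Here is a minimal sketch with made-up example values; the local variable names just have to match what the template expects:

require 'chrome_extension'

proxy_proto  = "http"
proxy_host   = "proxy.example.com"
proxy_port   = "3128"
proxy_user   = "myuser"
proxy_pass   = "mypass"
proxy_bypass = "localhost,127.0.0.1"

ChromeExtension.pack_extension(file: "/path/of/target/extension.crx",
                               dir: "/path/of/proxy/extension",
                               erb_binding: binding) # exposes the local variables above to the ERB templates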

Using the extension with Watir

proxy_proto, proxy_user, proxy_pass, proxy_host, proxy_port = <...>
chrome_caps = Selenium::WebDriver::Remote::Capabilities.chrome()
chrome_caps.proxy = Selenium::WebDriver::Proxy.new(http: "#{proxy_proto}://#{proxy_host}:#{proxy_port}", ssl: "#{proxy_proto}://#{proxy_host}:#{proxy_port}")
# there is a bug in Watir where providing an object here results in an error 
# options = Selenium::WebDriver::Chrome::Options.new
# options.add_extension proxy_chrome_ext_file if proxy_chrome_ext_file
options = {}
options[:extensions] = [proxy_chrome_ext_file] if proxy_chrome_ext_file
browser = Watir::Browser.new :chrome, desired_capabilities: chrome_caps, switches: chrome_switches, options: options

Bonus content - CRX2 method


  #  @note original crx2 format description https://web.archive.org/web/20180114090616/https://developer.chrome.com/extensions/crx
  def self.header_v2_extension(data, key: nil)
    key ||= gen_rsa_key()
    digest = OpenSSL::Digest.new('sha1')
    header = String.new(encoding: "ASCII-8BIT")

    # this is exactly the same signature as `ssh_do_sign(data)` from net/ssh produces
    signature = key.sign(digest, data)
    signature_length = signature.length
    pubkey_length = key.public_key.to_der.length

    header << "Cr24"
    header << [ 2 ].pack("V") # version
    header << [ pubkey_length ].pack("V")
    header << [ signature_length ].pack("V")
    header << key.public_key.to_der
    header << signature

    return header
  end
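
Since ::pack_extension dispatches on the crxv parameter, once this method is added to the ChromeExtension class a CRX2 file can be produced without any other change; a minimal sketch:

ChromeExtension.pack_extension(file: "/path/of/target/extension.crx",
                               dir: "/path/of/proxy/extension",
                               crxv: "v2")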

Credits

Monday, April 15, 2019

accessing namespaces of a docker/podman container (nsenter)

There is a nice utility `nsenter` that allows you to switch to the namespaces of another process. It took me considerable time to find it today, so I thought to write a short blog post about it.

Now I have a Podman container (for Docker just use the `docker` command instead of `podman` below). I started that container with:

$ sudo podman run -t -a STDIN -a STDOUT -a STDERR --rm=true --entrypoint /bin/bash quay.io/example/image:version

I've been running some tests in it, but it turned out I want to increase its limits without destroying my preparations by exiting the process. So the first thing is to figure out the PID namespace of my container:

$ sudo podman ps --ns
CONTAINER ID  NAMES                PID   CGROUPNS    IPC         MNT         NET         PIDNS       USERNS      UTS
a147a3a5b35f  fervent_stonebraker  1408  4026531835  4026532431  4026532429  4026532360  4026532432  4026531837  4026532430

I see the different namespaces, but `nsenter` requires a file name to switch to a PID namespace. So I will use the PID from the output above.

$ sudo nsenter --pid=/proc/1408/ns/pid

The above starts a shell for me in the PID namespace of my container. Now I want to change the limits. Interesting to note here is that I target PID 1, as that is the PID of my bash shell inside the container:

$ sudo prlimit --rss=-1 --memlock=33554432 --pid 1

Finally verify limits in my container shell:

bash-4.2$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 23534
max locked memory       (kbytes, -l) 32768
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 16384
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1048576
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

One interesting thing is `ps` inside the namespace. If I run these two commands:

$ ps -ef
$ sudo nsenter --pid=/proc/1408/ns/pid ps -ef

They will show exactly the same output. That is because I still have the same `/proc` mounted even though my PID namespace has changed, and `/proc` is what `ps` looks at.

With `nsenter` you can switch to any namespace, not only PID. I hope this is a useful short demonstration of how to do fun things with Linux namespaces.

Some links:
  • https://lwn.net/Articles/531114/ - namespaces overview series

Saturday, January 19, 2019

Install OKD 3.11 with source version of openshift-ansible installer

To install OpenShift with openshift-ansible from source, one needs to build the openshift-ansible RPMs and install them as a repo on the machine performing the installation. For 3.11 in CI this is done by a YAML job definition.

First, clone the openshift-ansible repo.

$ git clone --depth=1 --branch=release-3.11 https://github.com/openshift/openshift-ansible.git

Then build the base image as described in the YAML.

$ cd openshift-ansible
$ BUILDAH_LAYERS=false sudo podman build -f images/installer/Dockerfile -t ocp-ansible --layers=false .

Run the image and prepare for RPM building

$ sudo podman run -t -a STDIN -a STDOUT -a STDERR --rm=true -u root ocp-ansible /bin/bash
# yum install tito createrepo
# git clone https://github.com/openshift/openshift-ansible.git --depth=1 --branch=release-3.11
# cd openshift-ansible
# git config --add user.email myemail@example.com
# git config --add user.name myname

Build the RPMs as pointed out in the RPM building section of the YAML, with slight modifications.

# tito tag --offline --no-auto-changelog
# tito build --output="_output/local/releases" --rpm --test --offline --quiet
# createrepo _output/local/releases/noarch

Now the RPM repo is under `_output/local/releases/noarch/`. Copy it to a web server, or locally onto the machine where you will run the installation. Then create a file /etc/yum.repos.d/my-ocp-ansible.repo:

[tmp-openshift-ansible]
baseurl = <file:// or http:// url of RPM repo>
enabled = 1
gpgcheck = 0
name = Custom built OpenShift Ansible repo

Finally perform the installation as described in the official docs.

$ ansible-playbook ....

Make sure that you see your RPMs in the install log under `List all openshift ansible packages`.

Thursday, January 10, 2019

Building debug firefox build from source RPM on Red Hat Enterprise Linux

In short:
  • Create an account on https://access.redhat.com.
  • Get Red Hat Enterprise Linux (RHEL)
    • Download and install RHEL server on a local physical or virtual machine (it is free with developer subscription).
    • Or spawn a RHEL machine in some cloud service.
    • Important: you will need a large machine. For me 4GB of RAM failed [*] and I used a 16GB one. I didn't check what the minimum requirement is.
  • If you installed your own RHEL, then you need to subscribe the machine.
    • subscription-manager register # use your access.redhat.com credentials
    • subscription-manager attach
      • if the above does not work automatically try the below
      • subscription-manager list --available
      • subscription-manager attach --pool=<whatever you find useful above>
  • sudo yum install yum-utils rpm-build
  • yumdownloader --source firefox
  • rpm -ivh firefox-*.rpm
  • sudo yum-builddep rpmbuild/SPECS/firefox.spec
    • on a vanilla system you will see missing dependencies
    • if you wanted to figure that out by yourself, you'd go to https://access.redhat.com and search for the packages to see what repos they come from (or maybe use some clever yum command that I don't know atm)
  • yum-config-manager --enable rhel-7-server-devtools-rpms rhel-7-server-optional-rpms
    • or edit /etc/yum.repos.d/redhat.repo
  • sudo yum-builddep rpmbuild/SPECS/firefox.spec # this time it will succeed
  • rpmbuild -ba --with=debug_build rpmbuild/SPECS/firefox.spec
  • find the built rpm at
    • ~/rpmbuild/RPMS/x86_64/firefox-60.4.0-1.el7.x86_64.rpm
    • ~/rpmbuild/RPMS/x86_64/firefox-debuginfo-60.4.0-1.el7.x86_64.rpm
    • ~/rpmbuild/SRPMS/firefox-60.4.0-1.el7.src.rpm

[*] it is really sad, in the past one could learn to be a developer on a budget machine. Nowadays it seems like even compiling your code takes a beefy one :/

Friday, November 23, 2018

Running Logstash container under OpenShift

What is the issue?

The main problem with running arbitrary images under OpenShift is that OpenShift starts containers as a random user. This is done for security reasons (isolation of workloads). A user can be given permission to run `privileged` containers, but this is not recommended if it can be avoided.

You can check my earlier blog post about building an SSH container image for OpenShift for more information and a more complicated example.

Logstash official container image

The official Logstash image can be found on Docker Hub and is built from the logstash-docker GitHub project. It is not specifically built to run in OpenShift, but it is still straightforward to run it unmodified. There are only two issues:
  • it tries to run as user 1000 and expects to find the Logstash code in that user's home directory
  • some configuration files lack the permissions needed for them to be modified by a random user ID

Getting it running

Depending on what you're trying to do, you may approach this in a somewhat different way. I will give a specific example that mostly retains the original configuration (beats input and stdout output) but adds a `config` file with a Kubernetes audit setup and disables Elasticsearch monitoring, as I don't have an Elasticsearch backend. I hope this provides enough of an example so you can set up your instance the way you desire.

Creating configuration

To store our custom configuration files, we will create a config map with the file content.
$ cat logstash-cfgmap.yml
apiVersion: v1
data:
  logstash-wrapper.sh: |-
      set -x -e
      rm -vf "/usr/share/logstash/config/logstash.yml"
      echo "xpack.monitoring.enabled: false" > "/usr/share/logstash/config/logstash.yml"
      exec /usr/local/bin/docker-entrypoint "$@"
  config: |-
    input{
        http{
            #TODO, figure out a way to use kubeconfig file to authenticate to logstash
            #https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl
            port=>8888
            host=>"0.0.0.0"
        }
    }
    filter{
        split{
            # Webhook audit backend sends several events together with EventList
            # split each event here.
            field=>[items]
            # We only need event subelement, remove others.
            remove_field=>[headers, metadata, apiVersion, "@timestamp", kind, "@version", host]
        }
        mutate{
            rename => {items=>event}
        }
    }
    output{
        file{
            # Audit events from different users will be saved into different files.
            path=>"/var/log/kube-audit-%{[event][user][username]}/audit"
        }
    }
kind: ConfigMap
metadata:
  name: logstash
$ oc create -f logstash-cfgmap.yml
configmap/logstash created

With the above config map we have two files:
  • logstash-wrapper.sh - we need this to run some custom commands before delegating back to the image's original entry point: namely to replace the original `logstash.yml`, which lacks group write permissions, and to disable Elasticsearch monitoring, which is enabled by default. The write permissions are needed in case the Logstash startup script notices environment variables that need to be converted to configuration entries and written into that file. See env2yaml.go and the docker-config docs.
  • config - this file contains the Logstash pipeline configuration and is a copy of what I presently see in the Kubernetes auditing docs.
Note that at this step you could create the full Logstash configuration inside the config map, together with `logstash.yml`, `log4j2.properties`, `pipelines.yml`, etc. Then we could ignore the default config from the image.

Creating deployment config

$ oc run logstash  --image=logstash:6.5.0 --env=LOGSTASH_HOME\=/usr/share/logstash --command=true bash -- /etc/logstash/logstash-wrapper.sh -f /etc/logstash/config
deploymentconfig.apps.openshift.io/logstash created

A few things to explain:
  • we are setting the LOGSTASH_HOME environment variable to `/usr/share/logstash` because we are running as a random user, so the user's home directory will not work
  • we override the container start command to use our wrapper script
    • we add `-f /etc/logstash/config` to point at our custom config
    • in case we wanted to put all our configuration in the config map, we could instead set `--path.settings /etc/logstash/`
    • once pull/113 is merged, the custom startup script wrapper will not be needed, but we may still want to provide additional arguments like `-f` and `--path.settings`
Further, we need to make sure our custom configuration is mounted under `/etc/logstash`:
$ oc set volume --add=true --configmap-name=logstash --mount-path=/etc/logstash dc/logstash
deploymentconfig.apps.openshift.io/logstash volume updated

Finally, because our custom config wants to write under /var/log, we need to mount a volume on that path.
$ oc set volume --add=true --mount-path=/var/log dc/logstash

What we did is create an emptyDir volume that will go away when the pod dies. If you want to persist these logs, a Persistent Volume needs to be used instead.

Exposing logstash service to the world

First we need to create a service that will allow other project pods and Kubernetes to reach Logstash.
$ oc expose dc logstash --port=8888
service/logstash exposed
Port 8888 is what we have set as the HTTP endpoint in `config`. If you expose other ports, then you'd have to create one service for each port that you care about.

We can easily expose HTTP endpoints to the greater Internet so that we can collect logs from services external to the OpenShift environment. We can also expose non-HTTP endpoints to the Internet with the NodePort service type, but there are more limitations, or on 4.x with an Ingress Controller. Below is how to do it for HTTP traffic.
$ oc expose service logstash --name=logstash-http-input
route.route.openshift.io/logstash-http-input exposed

Important: Only expose secured endpoints to the Internet! In the above example the endpoint is insecure and no authentication is required. Thus somebody can DoS your Logstash service easily.

That's all.

    Friday, December 29, 2017

    Why am I a necromancer?


    Some forum zealots bully poor souls who answer or correct 2-, 3- or 5-year-old threads. The same zealots are usually at the same time scolding users who ask without searching first.

    Now my question is: what is the point of searching 5-year-old posts that have never been updated? If we are going to have a canonical source of truth for every question, then we would have to update those. Or, if we consider old threads uninteresting and thus not worth updating, then why don't we delete them after some time to stop polluting Internet search engine results?

    Personally, I find it makes the most sense to keep old threads and, when there is an update, to put it in. If I reached a thread, it had a pretty high search ranking, so it is likely other users will hit it too. Why create a new thread and make the information harder to reach? Or why delete old posts that might be useful? Even outdated, they often provide the necessary clues to get closer to the desired result.

    My track record so far is some 22 necromancer badges on StackOverflow so I think other people also appreciate my approach. In fact, most of my answers are to old questions that I reached by means of Internet search engines and decided to update.

    Now there is the dark side: clueless users who put useless comments in old threads, or who don't understand what has already been written (or didn't read it) and ask stupid questions [*]. The thing is that unfortunately they can't be avoided, and they spam new threads as well. I don't think useless bumping of old threads should be treated the same as useful updates made to old threads.

    In summary:
    • Old thread
      • useful post
        • upvote
        • clap
        • thank
        • like
        • etc.
      • useless comment/stupid question
        • downvote
        • remove post
        • send angry face
        • ban the user
        • remove account
        • report to police
        • etc.
    • Recent thread
      • useful post
        • upvote
        • clap
        • thank
        • like
        • etc.
      • useless comment/stupid question
        • downvote
        • remove post
        • send angry face
        • ban the user
        • remove account
        • report to police
        • etc.
    Happy necromancing.

    [*] I'm not immune to asking stupid questions. I'm exaggerating somewhat; the point is that one shouldn't attack every post to an old thread regardless of its quality.