tag:blogger.com,1999:blog-59510255558914942932024-03-18T09:39:57.278+02:00random bits of computer infoAleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.comBlogger41125tag:blogger.com,1999:blog-5951025555891494293.post-17685695401521850472023-03-07T17:45:00.007+02:002023-03-07T17:46:50.515+02:00Reading downloaded logs from Quay.io<p><a href="https://quay.io" target="_blank">Quay</a> is the container registry service sponsored by Red Hat, Inc., based on the <a href="https://www.projectquay.io">projectquay.io</a> free/open source project.</p><p>It also supports building images. One issue though is that logs are downloaded in a custom JSON format.<br /></p><p>A simple example is:</p><p><span style="background-color: #d9ead3;">> {"logs":[{"data":{"datetime":"2023-03-07 15:19:36.159268"},"message":"build-scheduled","type":"phase"}]}</span></p><p><br />In this short post I give you a very simple way to read those logs in a terminal:<br /></p><p><span style="background-color: #d9ead3;">> jq -r '.logs[].message' < /tmp/98afc879-8ef1-4cc6-9425-cf5e77712a5f.json</span><br /></p>Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-84712925074382587152021-04-28T22:09:00.006+03:002024-01-05T22:54:04.506+02:00Updating UEFI boot record on Fedora<p>This is more of a personal note.</p><p>Basically, <code style="background-color: #d9ead3;">grub-install</code> is deemed unnecessary now. If you use it, you will break Secure Boot. To restore your boot record, you can run <br /></p>
<pre style="background-color: #d9ead3;"><code>sudo dnf reinstall shim-* grub2-*</code></pre>
<p>then if you also need to update your grub config, which you should not need normally</p>
<p>Fedora 33 and older:</p>
<pre style="background-color: #d9ead3;"><code>sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg</code></pre>
<p>Fedora 34 and newer:</p>
<pre style="background-color: #d9ead3;"><code>sudo grub2-mkconfig -o /boot/grub2/grub.cfg</code></pre>
<p>In case you have upgraded to Fedora 34 from an earlier version, it is better to use <code style="background-color: #d9ead3;">sudo rpmconf -a</code> to restore <code style="background-color: #d9ead3;">/boot/efi/EFI/fedora/grub.cfg</code> to the new default version.<br /></p>
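<p>If you are not sure which layout your system currently has, a quick check of the ESP copy of <code style="background-color: #d9ead3;">grub.cfg</code> can tell. This is a sketch; it assumes the Fedora 34+ stub chain-loads the real config via a <code style="background-color: #d9ead3;">configfile</code> directive, which is what current Fedora ships:</p>

```shell
# On Fedora 34+ the grub.cfg on the EFI system partition is only a small
# stub that chain-loads the real config from /boot/grub2/grub.cfg.
cfg=/boot/efi/EFI/fedora/grub.cfg
if sudo grep -q configfile "$cfg"; then
  echo "stub config: menu is generated into /boot/grub2/grub.cfg (F34+ layout)"
else
  echo "full config: pre-F34 layout, consider rpmconf as above"
fi
```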
<p>While on it, I also learned about the <code style="background-color: #d9ead3;">efibootmgr</code> utility, which appears interesting<strike> although I don't know what use one can have of it</strike> ¯\_(ツ)_/¯ </p><p>Actually <code style="background-color: #d9ead3;">efibootmgr</code> can help in case you have a messed-up boot entry that doesn't actually boot to GRUB or whatever boot manager (or UKI image) you desire.</p><pre style="background-color: #d9ead3;"><code># list current entries
sudo efibootmgr
# remove existing entry
sudo efibootmgr -B -b 0
# create a new entry
sudo efibootmgr --disk /dev/nvme0n1 --part 1 -L Fedora -l '\EFI\fedora\grubx64.efi'
# change boot order (you see current order with the first command)
sudo efibootmgr -o 0000,0001,001C,001D,001E,001F,0020,0021,0022,0023,0024,0025
</code></pre><p>Important: when performing the fixes above, make sure to use a Fedora live image or a netinst image in recovery mode, lest you mess up <code style="background-color: #d9ead3;">selinux</code> labeling and the machine fails to start with <code style="background-color: #d9ead3;">Failed to mount API filesystem</code> (as it happened to me). 
Then you will have to boot with the <code style="background-color: #d9ead3;">enforcing=0</code> kernel cmdline argument and run <code style="background-color: #d9ead3;">fixfiles relabel</code> to fix that up.<br /></p><p>Literature: <br /></p><ul style="text-align: left;"><li>Source of the information above is <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1917213#c11">bug 1917213</a>, check it for more details.</li><li><a href="https://access.redhat.com/articles/2018943" target="_blank">Understanding Grub</a></li><li><a href="https://access.redhat.com/discussions/2026183" target="_blank">parted partitioning tool</a> </li><li><a href="https://access.redhat.com/solutions/1292763" target="_blank">restore the /boot directory after deleting it </a><br /></li></ul>Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-24358243414141542752021-04-26T22:23:00.002+03:002021-04-26T22:23:29.546+03:00Rsync between volumes on two different OpenShift clusters<p>This is a short HOWTO about rsync-ing data between two distinct OpenShift clusters.</p>
<p>You always have the option to <code style="background-color: #d9ead3;">oc rsync</code> the data from the source OpenShift cluster to your local workstation and then <code style="background-color: #d9ead3;">oc rsync</code> from your workstation to the target cluster. But if you have half a terabyte of data, you may not have enough local space, or it may take several days because of network bandwidth limitations.</p>
<p>The method I describe below avoids these inefficiencies, and the <code style="background-color: #d9ead3;">rsync</code> process is restarted automatically in case some network or system glitch kills it.<br /></p>
<p>It basically works by having:</p>
<ul style="text-align: left;"><li>a kubeconfig file with access to the target OpenShift cluster inside a secret on the source OpenShift cluster</li><li>a pod on the target OpenShift cluster with the target volume mounted</li><li>a pod on the source OpenShift cluster with the source volume and the kubeconfig secret mounted, and an entrypoint running <code style="background-color: #d9ead3;">oc rsync</code><br /></li></ul><p>So let's start by generating a proper kubeconfig secret.</p>
<pre style="background-color: #d9ead3;"><code>$ touch /tmp/kubeconfig
$ chmod 600 /tmp/kubeconfig
$ oc login --config=/tmp/kubeconfig # make sure to use target cluster API endpoint
$ oc project my-target-cluster-namespace --config=/tmp/kubeconfig </code></pre>
Note that the commands below will run against the source OpenShift cluster.
<pre style="background-color: #d9ead3;"><code>$ oc login # use source cluster API endpoint
$ oc create secret generic kubeconfig --from-file=config=/tmp/kubeconfig
</code></pre>
<p>I will assume that you already have your target pod running inside the target cluster. Otherwise you can create one similar to the source-cluster pod below; just use an entrypoint command that keeps it running permanently, for example <code style="background-color: #d9ead3;">/bin/sleep 1000000000000000000</code>.</p>
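<p>For completeness, a minimal target pod could look like the sketch below. The names, namespace, image, and claim are illustrative placeholders and must match your own target cluster:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: target-pod-name
  namespace: my-target-cluster-namespace
spec:
  containers:
  - image: quay.io/openshift/origin-cli:4.6
    name: wait
    # keep the pod running so that `oc rsync` has a target to copy into
    command: ["/bin/sleep", "1000000000000000000"]
    volumeMounts:
    - mountPath: /path/to/data/dir
      name: target-data-volume
  volumes:
  - name: target-data-volume
    persistentVolumeClaim:
      claimName: target-persistent-volume-claim-name
```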
<p>Now all we need to do is run a proper pod in source cluster to do the rsync task. Here is an example pod YAML with comments to make clear what to use in your situation:</p>
<pre style="background-color: #d9ead3;"><code>apiVersion: v1
kind: Pod
metadata:
  name: rsync-pod
  namespace: my-namespace-on-source-cluster
spec:
  containers:
  # use client version ±1 of target OpenShift cluster version
  - image: quay.io/openshift/origin-cli:4.6
    name: rsync
    command:
    - "oc"
    args:
    - "--namespace=my-target-cluster-namespace"
    - "--kubeconfig=/run/secrets/kube/config"
    # insecure TLS is not recommended but is a quick hack to get you going
    - "--insecure-skip-tls-verify=true"
    - "rsync"
    - "--compress=true"
    - "--progress=true"
    - "--strategy=rsync"
    - "/path/to/data/dir/"
    - "target-pod-name:/path/to/data/dir/"
    volumeMounts:
    - mountPath: /path/to/data/dir
      name: source-data-volume
    - mountPath: /run/secrets/kube
      name: kubeconfig
      readOnly: true
  # restart policy will keep restarting your pod until rsync completes successfully
  restartPolicy: OnFailure
  terminationGracePeriodSeconds: 30
  volumes:
  - name: source-data-volume
    persistentVolumeClaim:
      claimName: source-persistent-volume-claim-name
  - name: kubeconfig
    secret:
      defaultMode: 420
      secretName: kubeconfig</code></pre>
The last step is to create this pod inside the source cluster:
<pre style="background-color: #d9ead3;"><code>$ oc create -f rsync-pod.yaml</code></pre>
Now check what state your pod is in:
<pre style="background-color: #d9ead3;"><code>$ oc describe pod rsync-pod</code></pre>
If it starts properly, then monitor the progress:
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ oc logs -f rsync-pod</code></pre>Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-78756416225390555952020-12-04T09:58:00.003+02:002020-12-04T09:58:33.219+02:00Why Linux sucks with drivers?I just found this blog post as an unpublished draft with only one line <br /><br /><blockquote>I never liked Microsoft in particular. But no </blockquote><p>I honestly can't remember what I intended to write here. My guess is that I was frustrated with <a href="https://github.com/RadeonOpenCompute/ROCm" target="_blank">ROCm</a> and the state of GPU drivers and frameworks in Linux. And the situation is still quite frustrating. <br /></p><p>But I recently saw the situation with Windows drivers, and I'm now convinced Windows is no better with drivers unless you're buying the latest hardware.<br /><br />Last week I upgraded my home Wi-Fi router to the latest <a href="https://openwrt.org/">OpenWRT</a> with more secure settings, including optional WPA3 support. And while everything else started to work better, one Windows 10 laptop started to disconnect from the network very often, and network performance was not enough to play YouTube videos. <br /><br />The machine is a pretty decent one, an HP Inspiron 15 3000 series with an i7 CPU and a decent amount of RAM, still only a few years old. So I thought that the old Atheros/Qualcomm Wi-Fi card needed a driver update.<br /><br />What I found on the HP website was from 2017 and didn't yield any better results. Then, with some fear, I tried <a href="https://www.ath-drivers.eu">ath-drivers.eu</a> as an unofficial driver source and the latest driver for the card, from 2019. No luck either.</p><p>Now I had the option to configure the old router just for this laptop. But this didn't sound right and would still compromise the whole local network. Then I decided to find a second-hand mPCIe Wi-Fi card. 
Choices basically boil down to old Intel, Broadcom and Realtek models. Realtek is the one I did *not* try, due to lack of reputation.<br /><br />I found a guy who had both Intel and Broadcom models, so I could take both home and see which one works better. The Intel model had drivers in Windows Update from 2013 only (looking at what is available, the latest mPCIe Intel model I found is discontinued and has 2019 drivers). The Broadcom had 2016 or something.<br /><br />I wanted to try the Broadcom first due to Bluetooth 4.0 LE support. Some new mice and other devices only support that version. It performed well, but the computer crashed a few times in one day.<br /><br />Finally I tried the Intel with the 2013 driver. Now that worked rock solid and fast. The downside is Bluetooth 3.0 support, but the mouse can also be used with a receiver, so I guess it should be good enough. I see Bluetooth 4 USB dongles for $5, so not a big deal to add such support if needed in the future.<br /><br />Unfortunately this card will never get WPA3 support, and I have no idea whether the recent WPA2 vulnerabilities have been fixed with it somehow or not.<br /><br />In conclusion I see that for older hardware — not ancient, just a little old but perfectly fine — Linux still gets much better support.<br /><br />I'm sorry if you didn't expect just another Windows rant with a click-bait title. Still, I needed to express my frustration with the state of computing. And I don't mention Apple here; it is such a closed ecosystem that no amount of polish can fix it.<br /></p>Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-74990058364426359352020-04-01T16:06:00.002+03:002020-04-01T16:06:55.530+03:00Back to third grade with SSH or how to set up ~/.ssh/authorized_keys Very often I am asked to SSH to a machine just to hit access denied. A few round trips are then needed until the issue is resolved. 
Here are my commands to get it working on the first try.<br />
<br />
<pre style="background-color: #d9ead3;"><code># mkdir .ssh
# vi .ssh/authorized_keys # add user's public key here
# chown -R user:user .ssh
# chmod 700 .ssh
# chmod 600 .ssh/authorized_keys
# restorecon -R .ssh
</code></pre>
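<p>To see what the end state should look like, here is a sketch that recreates the steps above in a throwaway directory (on a real server you would operate on the user's actual home; the key line is a placeholder). Permissions looser than these make <code style="background-color: #d9ead3;">sshd</code> silently ignore the key when <code style="background-color: #d9ead3;">StrictModes</code> is on, which is the default:</p>

```shell
# recreate the server-side steps in a scratch "home" directory
home=$(mktemp -d)
mkdir "$home/.ssh"
chmod 700 "$home/.ssh"
echo 'ssh-ed25519 AAAA... user@client' > "$home/.ssh/authorized_keys"  # placeholder key
chmod 600 "$home/.ssh/authorized_keys"
# expect 700 on .ssh and 600 on authorized_keys (GNU stat)
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```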
<br />
The <code style="background-color: #d9ead3;">restorecon</code> command is for SELinux-enabled distributions.<br /><br />Hope you find it useful.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com1tag:blogger.com,1999:blog-5951025555891494293.post-76471027129441568022019-11-21T21:39:00.000+02:002019-12-04T01:16:54.758+02:00 Using authenticated proxy with Selenium / Packaging Chrome extensions with Ruby<h2>
Overview</h2>
Recently I got a request to implement authenticated proxy support for our product test framework. The problem is that recent browsers no longer allow the widely popular <code style="background-color: #d9ead3;">http://username:password@proxy.example.com</code> syntax and instead ask you to enter credentials manually.<br />
<br />
The next problem is that Selenium does not let you interact with these basic auth dialogs <a href="https://github.com/SeleniumHQ/selenium/issues/453" target="_blank">[1]</a><a href="https://github.com/w3c/webdriver/issues/385" target="_blank">[2]</a>. So how should one go about this?<br />
<br />
Chrome allows you to do this with a custom extension that you can insert with selenium/watir.<br />
<br />
<br />
One additional complication is that we can use a different proxy server each time. Thus the extension needs to be packaged on the fly.<br />
<br />
<h2>
Chrome extension</h2>
This is the proxy extension as I use it. See it as an example for whatever you'll be trying to do. It consists of only 2 files you can put in an empty directory.<br />
<br />
<h4>
manifest.json</h4>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>{
  "version": "0.0.1",
  "manifest_version": 2,
  "name": "Authenticated Proxy",
  "permissions": [
    "<all_urls>",
    "proxy",
    "unlimitedStorage",
    "webRequest",
    "webRequestBlocking",
    "storage",
    "tabs"
  ],
  "background": {
    "scripts": ["background.js"]
  },
  "minimum_chrome_version": "23.0.0"
}</code></pre>
<br />
<h4>
background.js.erb </h4>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "<%= proxy_proto %>",
      host: "<%= proxy_host %>",
      port: parseInt(<%= proxy_port %>)
    },
    bypassList: <%= proxy_bypass.split(/[ ,]/).delete_if(&:empty?).to_json %>
  }
};

chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

function callbackFn(details) {
  return {
    authCredentials: {
      username: "<%= proxy_user %>",
      password: "<%= proxy_pass %>"
    }
  };
}

chrome.webRequest.onAuthRequired.addListener(
  callbackFn,
  {urls: ["<all_urls>"]},
  ['blocking']
);</code></pre>
<br />
<h2>
<a href="https://developers.google.com/protocol-buffers/" rel="nofollow" target="_blank">Protocol Buffers</a></h2>
As you can see on the web site, Protocol Buffers is a method of serializing structured data. For the CRX3 format (unlike CRX2), it is part of the required header of the extension.<br />
<br />
I decided to use <a href="https://github.com/ruby-protobuf/protobuf">ruby-protobuf</a> project instead of the google ruby library because it appeared well maintained and pure ruby. I assume google ruby library will work well too.<br />
<h2>
The Packager</h2>
A CRX v3 file consists of:<br />
<ul>
<li>Cr24 - ASCII 8bit magic string</li>
<li>3 - protocol version in unsigned 32bit little endian</li>
<li>header length in bytes in unsigned 32bit little endian</li>
<li>header itself - the protobuf serialized object</li>
<ul>
<li><a href="https://cs.chromium.org/chromium/src/components/crx_file/crx3.proto" rel="nofollow" target="_blank">crx3.proto</a> - the protobuf descriptor</li>
<li>as a rule of thumb</li>
<ul>
<li> all lengths inside are given as unsigned 32bit little-endian integers </li>
<li>key files are inserted in PKCS#8 binary encoding (Ruby's <code style="background-color: #d9ead3;">key.to_der</code> worked fine)</li>
</ul>
</ul>
<li>ZIP archive of the extension files</li>
</ul>
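<p>The fixed-size part of that layout is easy to check from a shell. The sketch below fabricates just the 12-byte prefix with <code style="background-color: #d9ead3;">printf</code> so the commands are runnable as-is; point <code style="background-color: #d9ead3;">crx</code> at a real packed file instead to inspect it:</p>

```shell
# fabricate a sample header: magic "Cr24", version 3, header length 10
crx=$(mktemp)
printf 'Cr24\003\000\000\000\012\000\000\000' > "$crx"

head -c 4 "$crx" && echo                  # -> Cr24 (magic string)
od -An -tu4 -j4 -N4 "$crx" | tr -d ' '    # -> 3 (CRX version, little endian)
od -An -tu4 -j8 -N4 "$crx" | tr -d ' '    # -> 10 (header length in bytes)
```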
<h3>
Generating protobuf stub</h3>
We need to install the Google protobuf compiler <code style="background-color: #d9ead3;">protoc</code>. Save the protocol file in the directory where you want the stub to live. Then generate it with:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>protoc --plugin=protoc-gen-ruby-protobuf=`ls ~/bin/protoc-gen-ruby` --ruby-protobuf_out=./ path/chrome_crx3/crx3.proto</code></pre>
This will create a file <code style="background-color: #d9ead3;">crx3.pb.rb</code> in the same directory as the protocol file. All you need is to <code style="background-color: #d9ead3;">require 'path/crx3.pb.rb'</code> wherever you want to use that format.<br />
<h3>
Actual packager</h3>
At this point the packager is straightforward to implement. Pasting the whole logic here.<br />
<br />
We have one <code style="background-color: #d9ead3;">::zip</code> method to generate a ZIP archive in memory. If an ERB binding is provided by the caller, any <code style="background-color: #d9ead3;">.erb</code> files are processed. That's how the above <code style="background-color: #d9ead3;">background.js.erb</code> works.<br />
<br />
The method <code style="background-color: #d9ead3;">::header_v3_extension</code> generates the signature and constructs the whole file header.<br />
<br />
Finally <code style="background-color: #d9ead3;">::pack_extension</code> just glues the two methods above to generate the final extension.<br />
<h4>
chrome_extension.rb</h4>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>require 'erb'
require 'find'
require 'openssl'
require 'zip'
<b>
require_relative 'resource/chrome_crx3/crx3.pb.rb'</b>

class ChromeExtension
  def self.gen_rsa_key(len=2048)
    OpenSSL::PKey::RSA.generate(len)
  end

  # @note file format spec pointers:
  #   https://groups.google.com/a/chromium.org/d/msgid/chromium-extensions/977b9b99-2bb9-476b-992f-97a3e37bf20c%40chromium.org
  def self.<b>header_v3_extension(data, key: nil)</b>
    key ||= gen_rsa_key()
    digest = OpenSSL::Digest.new('sha256')

    signed_data = Crx_file::SignedData.new
    signed_data.crx_id = digest.digest(key.public_key.to_der)[0...16]
    signed_data = signed_data.encode

    signature_data = String.new(encoding: "ASCII-8BIT")
    signature_data << "CRX3 SignedData\00"
    signature_data << [ signed_data.size ].pack("V")
    signature_data << signed_data
    signature_data << data
    signature = key.sign(digest, signature_data)

    proof = Crx_file::AsymmetricKeyProof.new
    proof.public_key = key.public_key.to_der
    proof.signature = signature

    header_struct = Crx_file::CrxFileHeader.new
    header_struct.sha256_with_rsa = [proof]
    header_struct.signed_header_data = signed_data
    header_struct = header_struct.encode

    header = String.new(encoding: "ASCII-8BIT")
    header << "Cr24"
    header << [ 3 ].pack("V") # version
    header << [ header_struct.size ].pack("V")
    header << header_struct

    return header
  end

  # @param file [String] to write result to
  # @param dir [String] to read extension from
  # @param key [OpenSSL::PKey]
  # @param crxv [String] version of CRX file to create
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return undefined
  def self.<b>pack_extension(file:, dir:, key: nil, crxv: "v3", erb_binding: nil)</b>
    zip = zip(dir: dir, erb_binding: erb_binding)
    File.open(file, 'wb') do |io|
      io.write self.send(:"header_#{crxv}_extension", zip, key: key)
      io.write zip
    end
  end

  # @param dir [String] to read extension from
  # @param erb_binding [Binding] optional if you want to process ERB files
  # @return [String] the zip file content
  def self.<b>zip(dir:, erb_binding: nil)</b>
    dir_prefix_len = dir.end_with?("/") ? dir.length : dir.length + 1
    zip = StringIO.new
    zip.set_encoding "ASCII-8BIT"
    Zip::OutputStream::write_buffer(zip) do |zio|
      Find.find(dir) do |file|
        if File.file? file
          if erb_binding && file.end_with?(".erb")
            zio.put_next_entry(file[dir_prefix_len...-4])
            erb = ERB.new(File.read file)
            erb.location = file
            result = erb.result(erb_binding)
            zio.write(result)
            Kernel.puts result
          else
            zio.put_next_entry(file[dir_prefix_len..-1])
            zio.write(File.read(file))
          end
        end
      end
    end
    return zip.string
  end
end</code></pre>
<h3>
Using the packager</h3>
Packing the extension is as simple as:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>require 'chrome_extension'
ChromeExtension.pack_extension(file: "/path/of/target/extension.crx", dir: "/path/of/proxy/extension")</code></pre>
<h3>
Using the extension with Watir</h3>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>proxy_proto, proxy_user, proxy_pass, proxy_host, proxy_port = <...>
chrome_caps = Selenium::WebDriver::Remote::Capabilities.chrome()
chrome_caps.proxy = Selenium::WebDriver::Proxy.new({http: "#{proxy_proto}://#{proxy_host}:#{proxy_port}", ssl: "#{proxy_proto}://#{proxy_host}:#{proxy_port}"})
# there is a bug in Watir where providing an object here results in an error
# options = Selenium::WebDriver::Chrome::Options.new
# options.add_extension proxy_chrome_ext_file if proxy_chrome_ext_file
options = {}
options[:extensions] = [proxy_chrome_ext_file] if proxy_chrome_ext_file
browser = Watir::Browser.new :chrome, desired_capabilities: chrome_caps, switches: chrome_switches, options: options</code></pre>
<h3>
Bonus content - CRX2 method</h3>
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code># @note original crx2 format description https://web.archive.org/web/20180114090616/https://developer.chrome.com/extensions/crx
def self.header_v2_extension(data, key: nil)
  key ||= gen_rsa_key()
  digest = OpenSSL::Digest.new('sha1')
  header = String.new(encoding: "ASCII-8BIT")
  # it is exactly the same signature as `ssh_do_sign(data)` from net/ssh does
  signature = key.sign(digest, data)
  signature_length = signature.length
  pubkey_length = key.public_key.to_der.length
  header << "Cr24"
  header << [ 2 ].pack("V") # version
  header << [ pubkey_length ].pack("V")
  header << [ signature_length ].pack("V")
  header << key.public_key.to_der
  header << signature
  return header
end</code></pre>
<br />
<h2>
Credits</h2>
<ul>
<li><a href="https://botproxy.net/docs/how-to/setting-chromedriver-proxy-auth-with-selenium-using-python/">https://botproxy.net/docs/how-to/setting-chromedriver-proxy-auth-with-selenium-using-python/</a></li>
<ul>
<li>A nice blog post about doing this with python, unfortunately it is using CRX v2 file format.</li>
<li>You can see content of actual Chrome extension there as well. </li>
</ul>
<li><a href="https://groups.google.com/a/chromium.org/d/topic/chromium-extensions/K3YIsNL_Et4/discussion">https://groups.google.com/a/chromium.org/d/topic/chromium-extensions/K3YIsNL_Et4/discussion</a></li>
<ul>
<li>discussion about CRX file format with links to other implementations</li>
</ul>
<li><a href="https://github.com/pawliczka/CRX3-Creator">https://github.com/pawliczka/CRX3-Creator</a></li>
<ul>
<li>python implementation of extension packager</li>
</ul>
<li><a href="https://crx-checker.appspot.com/">https://crx-checker.appspot.com</a></li>
<ul>
<li>CRX verifier for both versions</li>
</ul>
</ul>
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com2tag:blogger.com,1999:blog-5951025555891494293.post-28971557474861227722019-04-15T19:15:00.002+03:002019-04-15T19:22:27.081+03:00accessing namespaces of a docker/podman container (nsenter)There is a nice utility <code style="background-color: #d9ead3;">`nsenter`</code> that allows you to switch to the namespaces of another process. It took me considerable time to <a href="https://unix.stackexchange.com/a/485516/14907" target="_blank">search it out</a> today, so I thought to write a short blog post about it.<br />
<br />
Now I have a Podman container (for docker just use <code style="background-color: #d9ead3;">`docker`</code> command instead of <code style="background-color: #d9ead3;">`podman`</code> below). I started that container by:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo podman run -t -a STDIN -a STDOUT -a STDERR --rm=true --entrypoint /bin/bash quay.io/example/image:version</code></pre>
<br />
I've been running some testing in it, but it turned out I wanted to increase limits without destroying my preparations by exiting the process. So the first thing is to figure out the PID namespace of my container:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo podman ps --ns
CONTAINER ID  NAMES                PID   CGROUPNS    IPC         MNT         NET         PIDNS       USERNS      UTS
a147a3a5b35f  fervent_stonebraker  1408  4026531835  4026532431  4026532429  4026532360  4026532432  4026531837  4026532430</code></pre>
<br />
I see different namespaces, but <code style="background-color: #d9ead3;">`nsenter`</code> requires a file name to switch to a PID namespace. So I will use the PID information in the above output.<br />
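<p>Those namespace files live under <code style="background-color: #d9ead3;">/proc/PID/ns</code>; the inode numbers in the symlink targets are the same numbers that <code style="background-color: #d9ead3;">podman ps --ns</code> printed above. A quick look at your own shell's namespaces:</p>

```shell
# each entry is a symlink whose target names the namespace type and inode,
# e.g. pid:[4026531836]
readlink /proc/$$/ns/pid /proc/$$/ns/mnt
```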
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo nsenter --pid=/proc/1408/ns/pid</code></pre>
<br />
The above starts a shell for me in the PID namespace of my container. Now I want to change limits. It is interesting to note that I change PID 1, as it is the PID of my bash shell in the container:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo prlimit --rss=-1 --memlock=33554432 --pid 1</code></pre>
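<p>With no new value given, <code style="background-color: #d9ead3;">prlimit</code> just prints the current limits of a process, which is handy to confirm that the change took effect. A sketch you can try on any process you own:</p>

```shell
# print the locked-memory and resident-set limits of the current shell;
# inside the entered namespace, --pid 1 would show the container shell
prlimit --pid $$ --memlock --rss
```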
<br />
Finally verify limits in my container shell:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>bash-4.2$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 23534
max locked memory       (kbytes, -l) 32768
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 16384
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1048576
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited</code></pre>
<br />
One interesting thing is <code style="background-color: #d9ead3;">`ps`</code> inside the namespace. If I run these two commands:<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ ps -ef
$ sudo nsenter --pid=/proc/1408/ns/pid ps -ef</code></pre>
<br />
They will show exactly the same output. It is because I still have the same <code style="background-color: #d9ead3;">`/proc`</code> mounted even though my PID namespace is changed. And that is what <code style="background-color: #d9ead3;">`ps`</code> looks at.<br />
<br />
With <code style="background-color: #d9ead3;">`nsenter`</code> you can switch any namespace, not only PID. I hope this is a useful short demonstration of how to do fun things with Linux namespaces.<br />
<br />
Some links:<br />
<ul>
<li><a href="https://lwn.net/Articles/531114/">https://lwn.net/Articles/531114/</a> - namespaces overview series </li>
</ul>
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com3tag:blogger.com,1999:blog-5951025555891494293.post-10681492997186019682019-01-19T17:51:00.004+02:002019-01-19T17:52:36.784+02:00Install OKD 3.11 with source version of openshift-ansible installerTo install OpenShift with openshift-ansible from source, one needs to build the openshift-ansible RPMs and install them as a repo on the machine performing the installation. For 3.11 in CI, this is done by the following <a href="https://raw.githubusercontent.com/openshift/release/master/ci-operator/config/openshift/openshift-ansible/openshift-openshift-ansible-release-3.11.yaml">YAML</a>.<br />
<br />
First clone <a href="https://github.com/openshift/openshift-ansible">openshift-ansible repo</a>.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ git clone --depth=1 --branch=release-3.11 https://github.com/openshift/openshift-ansible.git
</code></pre>
<br />
Then build base image as described in the YAML.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ cd openshift-ansible
$ BUILDAH_LAYERS=false sudo podman build -f images/installer/Dockerfile -t ocp-ansible --layers=false .
</code></pre>
<br />
Run the image and prepare for RPM building<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo podman run -t -a STDIN -a STDOUT -a STDERR --rm=true -u root ocp-ansible /bin/bash
# yum install tito createrepo
# git clone https://github.com/openshift/openshift-ansible.git --depth=1 --branch=release-3.11
# git config --add user.email myemail@example.com
# git config --add user.name myname
</code></pre>
<br />
Build the RPMs as described in the RPM building section of the YAML, with slight modifications. In <b>bold</b> I mark the things that differ.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code># tito tag --offline <b>--no-auto-changelog</b>
# tito build --output="_output/local/releases" --rpm --test --offline --quiet
# createrepo _output/local/releases/noarch
</code></pre>
<br />
Now the RPM repo is under <code style="background-color: #d9ead3;">`_output/local/releases/noarch/`</code>. Copy it to a web server or locally to the machine where you will run the installation. Then create a file <code style="background-color: #d9ead3;">/etc/yum.repos.d/my-ocp-ansible.repo</code> (the filename must end in <code style="background-color: #d9ead3;">.repo</code> for yum to read it):<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>[tmp-openshift-ansible]
baseurl = <file:// or http:// url of RPM repo>
enabled = 1
gpgcheck = 0
name = Custom built OpenShift Ansible repo
</code></pre>
<br />
Finally perform the installation as described in the <a href="https://docs.openshift.com/container-platform/3.11/install/running_install.html">official docs</a>.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ ansible-playbook ....
</code></pre>
<br />
Make sure that you see your RPMs in the install log under <code style="background-color: #d9ead3;">`List all openshift ansible packages`</code>.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com1tag:blogger.com,1999:blog-5951025555891494293.post-65328223763524858412019-01-10T13:55:00.000+02:002019-01-10T19:31:42.267+02:00Building debug firefox build from source RPM on Red Hat Enterprise LinuxIn short:<br />
<ul>
<li>Create an account on <a href="https://access.redhat.com/">https://access.redhat.com</a>.</li>
<li>Get Red Hat Enterprise Linux (RHEL)</li>
<ul>
<li>Download and install RHEL server on a local physical or virtual machine (it is free with developer subscription).</li>
<li>Or spawn a RHEL machine in some cloud service.</li>
<li><b>Important:</b> you will need a large machine. For me 4GB failed [*] and I used a 16GB one. I didn't check what the minimum requirement is.</li>
</ul>
<li>If you installed your own RHEL, then you need to subscribe the machine.</li>
<ul>
<li>subscription-manager register # use your access.redhat.com credentials</li>
<li>subscription-manager attach</li>
<ul>
<li>if the above does not work automatically try the below </li>
<li>subscription-manager list --available</li>
<li>subscription-manager attach --pool=<whatever you find useful above></li>
</ul>
</ul>
<li>sudo yum install yum-utils rpm-build</li>
<li>yumdownloader --source firefox</li>
<li>rpm -ivh firefox-*.rpm </li>
<li>sudo yum-builddep rpmbuild/SPECS/firefox.spec</li>
<ul>
<li>on a vanilla system you will see missing dependencies</li>
<li>if you wanted to figure that out by yourself, you'd go to <a href="https://access.redhat.com/">https://access.redhat.com</a> and search for the packages to see what repos they come from (or maybe use some clever yum command that I don't know atm)</li>
</ul>
<li>yum-config-manager --enable rhel-7-server-devtools-rpms rhel-7-server-optional-rpms</li>
<ul>
<li>or edit /etc/yum.repos.d/redhat.repo</li>
</ul>
<li>sudo yum-builddep rpmbuild/SPECS/firefox.spec # this time it will succeed</li>
<li>rpmbuild -ba --with=debug_build rpmbuild/SPECS/firefox.spec </li>
<li>find the built rpm at</li>
<ul>
<li>~/rpmbuild/RPMS/x86_64/firefox-60.4.0-1.el7.x86_64.rpm</li>
<li>~/rpmbuild/RPMS/x86_64/firefox-debuginfo-60.4.0-1.el7.x86_64.rpm</li>
<li>~/rpmbuild/SRPMS/firefox-60.4.0-1.el7.src.rpm</li>
</ul>
</ul>
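The steps above condense into the following shell sketch. It is written dry-run style — each command is echoed into a log instead of executed — because the real commands only make sense on a subscribed RHEL 7 machine; remove the `run` wrapper there to execute them for real.

```shell
# Dry-run sketch of the whole flow: each step is echoed into a log instead
# of executed, since the real commands need a subscribed RHEL 7 machine.
: > /tmp/firefox-build-steps.log
run() { echo "+ $*" | tee -a /tmp/firefox-build-steps.log; }

run sudo yum install -y yum-utils rpm-build
run yumdownloader --source firefox
run rpm -ivh firefox-*.src.rpm
run sudo yum-builddep rpmbuild/SPECS/firefox.spec   # fails with vanilla repos
run yum-config-manager --enable rhel-7-server-devtools-rpms rhel-7-server-optional-rpms
run sudo yum-builddep rpmbuild/SPECS/firefox.spec   # this time it succeeds
run rpmbuild -ba --with=debug_build rpmbuild/SPECS/firefox.spec
```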
<br />
[*] it is really sad, in the past one could learn to be a developer on a budget machine. Nowadays it seems like even compiling your code takes a beefy one :/ <br />
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-82767782262156784092018-11-23T19:53:00.003+02:002021-12-07T12:22:11.403+02:00Running Logstash container under OpenShift<h2>
What is the issue?</h2>
The main problem with running arbitrary images under OpenShift is that OpenShift starts containers as a random user. This is done for security reasons (isolation of workloads). A user can be given permission to run `privileged` containers, but this is not recommended if it can be avoided.<br />
<br />
You can check my earlier blog post <a href="https://rboci.blogspot.com/2016/06/creating-docker-images-suitable-for.html" target="_blank">Creating docker images suitable for OpenShift (ssh-git image HowTo)</a> for more information and a more complicated example.<br />
<h2>
Logstash official container image</h2>
The official Logstash image can be found on <a href="https://hub.docker.com/_/logstash/" target="_blank">Docker Hub</a> and is built from the <a href="https://github.com/elastic/logstash-docker" target="_blank">logstash-docker</a> GitHub project. It is not specifically built to run on OpenShift, but it is still straightforward to run it <b>unmodified</b>. There are only two issues:<br />
<ul>
<li>it tries to run as user 1000 and expects to find logstash code in user's home directory</li>
<li>some configuration files <a href="https://github.com/elastic/logstash-docker/pull/113" target="_blank">lack the permissions needed</a> to be modified by a random user ID</li>
</ul>
<h2>
Getting it running</h2>
Depending on what you're trying to do, you may approach this somewhat differently. I will give a specific example that mostly retains the original configuration (<a href="https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html" target="_blank">beats</a> input and stdout output) but adds a `config` file with the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#use-logstash-to-collect-and-distribute-audit-events-from-webhook-backend" target="_blank">Kubernetes audit setup</a> and disables elasticsearch monitoring, as I don't have an elasticsearch backend. I hope this provides enough of an example so you can set up your instance the way you desire.<br />
<br />
<h3>
Creating configuration</h3>
To store our custom configuration files, we will create a <a href="https://docs.okd.io/latest/dev_guide/configmaps.html" target="_blank">config map</a> with the file content.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ cat logstash-cfgmap.yml
apiVersion: v1
data:
  logstash-wrapper.sh: |-
    set -x -e
    rm -vf "/usr/share/logstash/config/logstash.yml"
    echo "xpack.monitoring.enabled: false" > "/usr/share/logstash/config/logstash.yml"
    exec /usr/local/bin/docker-entrypoint "$@"
  config: |-
    input{
      http{
        #TODO, figure out a way to use kubeconfig file to authenticate to logstash
        #https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl
        port=>8888
        host=>"0.0.0.0"
      }
    }
    filter{
      split{
        # Webhook audit backend sends several events together with EventList
        # split each event here.
        field=>[items]
        # We only need event subelement, remove others.
        remove_field=>[headers, metadata, apiVersion, "@timestamp", kind, "@version", host]
      }
      mutate{
        rename => {items=>event}
      }
    }
    output{
      file{
        # Audit events from different users will be saved into different files.
        path=>"/var/log/kube-audit-%{[event][user][username]}/audit"
      }
    }
kind: ConfigMap
metadata:
  name: logstash
$ oc create -f logstash-cfgmap.yml
configmap/logstash created
</pre>
<br />
With the above config map we define two files that will be available inside the container.<br />
<ul>
<li><b>logstash-wrapper.sh</b> - we need this to run some custom commands before delegating back to the image's original entry point: namely, to remove the original `logstash.yml` that <a href="https://github.com/elastic/logstash-docker/pull/113" target="_blank">lacks group write permissions</a>, and to disable elasticsearch monitoring, which is enabled by default. The write permissions are needed in case the logstash image startup script notices env variables that need to be converted to configuration entries and written into that file. See <a href="https://github.com/elastic/logstash-docker/blob/6.4.3/build/logstash/env2yaml/env2yaml.go">env2yaml.go</a> and the <a href="https://www.elastic.co/guide/en/logstash/current/docker-config.html">docker-config docs</a>.</li>
<li><b>config</b> - this file contains logstash configuration file and is a copy of what I presently see in <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#use-logstash-to-collect-and-distribute-audit-events-from-webhook-backend">kubernetes auditing docs</a>.</li>
</ul>
<b>Note:</b> at this step you can create the full Logstash configuration inside the config map, together with `logstash.yml`, `log4j2.properties`, `pipelines.yml`, etc. If you do so, the default configuration files from the official image can be ignored.<br />
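For illustration, such a fuller config map might look roughly like this. This is only a sketch: the config map name and the `logstash.yml`/`pipelines.yml` contents shown are examples, not what the official image ships.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-settings   # hypothetical name
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    xpack.monitoring.enabled: false
  pipelines.yml: |-
    - pipeline.id: main
      path.config: "/etc/logstash/config"
```

With everything in the config map, the container can be pointed at it with `--path.settings /etc/logstash/` as described below.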
<h3>
Creating deployment config</h3>
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc run logstash --image=logstash:6.5.0 --env=LOGSTASH_HOME\=/usr/share/logstash --command=true bash -- /etc/logstash/logstash-wrapper.sh -f /etc/logstash/config
deploymentconfig.apps.openshift.io/logstash created
</pre>
<br />
A few things to explain:<br />
<ul>
<li>we are setting the LOGSTASH_HOME environment variable to `/usr/share/logstash` because we are running as a random user, thus the user home directory will not work</li>
<li>we override container start command to our wrapper script</li>
<ul>
<li>we add `-f /etc/logstash/config` to point at our custom config</li>
<li>in case we wanted to put all our configuration in the config map, then we can <b>set instead </b>`--path.settings /etc/logstash/`</li>
<li>once <a href="https://github.com/elastic/logstash-docker/pull/113">pull/113</a> is merged, the custom startup script wrapper will not be needed, but we may still want to provide additional arguments like `-f` and `--path.settings`</li>
</ul>
</ul>
Further, we need to make sure our custom configuration is mounted under `/etc/logstash`<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc set volume --add=true --configmap-name=logstash --mount-path=/etc/logstash dc/logstash
deploymentconfig.apps.openshift.io/logstash volume updated
</pre>
<br />
Finally, because our custom config selects <i>/var/log</i> for writing logs, we need to mount a <a href="https://docs.okd.io/latest/dev_guide/volumes.html">volume</a> at that path.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">oc set volume --add=true --mount-path=/var/log dc/logstash
</pre>
<br />
What we did is create an <i>emptyDir</i> volume that will go away when the pod dies. If you want to persist these logs, then a <a href="https://docs.okd.io/latest/dev_guide/persistent_volumes.html">Persistent Volume</a> needs to be used instead.<br />
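A sketch of such a persistent volume claim (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logstash-logs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

It could then be attached with something like `oc set volume dc/logstash --add -t pvc --claim-name=logstash-logs --mount-path=/var/log`.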
<h3>
Exposing logstash service to the world</h3>
First we need to create a <a href="https://docs.okd.io/latest/architecture/core_concepts/pods_and_services.html">service</a> that will allow other project <a href="https://docs.okd.io/latest/architecture/core_concepts/pods_and_services.html">pods</a> and Kubernetes to reach Logstash.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc expose dc logstash --port=8888
service/logstash exposed
</pre>
Port 8888 is what we set as the HTTP endpoint in `config`. If you expose other ports, then you'd have to create one service for each port that you care about.<br />
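For reference, the service generated by the `oc expose` command above looks roughly like this (trimmed to the relevant fields; the exact selector depends on how the deployment config was created):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  ports:
  - port: 8888
    protocol: TCP
    targetPort: 8888
  selector:
    deploymentconfig: logstash
```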
<br />
We can easily expose HTTP endpoints to the great Internet so that we can collect logs from services external to the OpenShift environment. We can also expose non-HTTP endpoints to the Internet with the <a href="https://docs.okd.io/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html">node port</a> service type, but there are more limitations. Or, for OpenShift 4.x, the <a href="https://docs.okd.io/latest/networking/configuring_ingress_cluster_traffic/overview-traffic.html">Ingress Controller</a> can be used for exposing non-HTTP endpoints. Below is how to do it for HTTP traffic.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc expose service logstash --name=logstash-http-input
route.route.openshift.io/logstash-http-input exposed
</pre>
<br />
<b>Important:</b> Only expose secured endpoints to the Internet! In the above example the endpoint is insecure and no authentication is required. Thus somebody can DoS your Logstash service easily.<br />
<br />
That's all.<br />
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-54452483429619087642017-12-29T23:11:00.000+02:002017-12-29T23:16:32.108+02:00Why am I a necromancer.<br />
Some forum zealots are bullying poor souls who answer or correct 2-, 3- or 5-year-old threads. The same zealots are usually also scolding users who don't search first and only ask later.<br />
<br />
Now my question is: what is the point of searching 5-year-old posts that have never been updated? If we are going to have a canonical source of truth for every question, then we would have to update those. Or if we consider old threads uninteresting and thus not worth updating, then why don't we delete them after some time to stop polluting Internet search engine results?<br />
<br />
I personally find it makes the most sense to keep old threads and, when there is some update, put it in. If I reached a thread, then it had a pretty high search rating, so it is likely other users would hit it too. Why create a new thread and make information harder to reach? Or why delete old posts that might be useful? Even outdated, they often provide necessary clues to get closer to the desired results.<br />
<br />
My track record so far is some 22 necromancer badges on StackOverflow so I think other people also appreciate my approach. In fact, most of my answers are to old questions that I reached by means of Internet search engines and decided to update.<br />
<br />
Now there is the dark side: clueless users who put useless comments in old threads, or who don't understand what has already been written (or didn't read it) and ask stupid questions [*]. The thing is that, unfortunately, they can't be avoided, and they spam even new threads. I don't think useless bumping of old threads should be treated the same as useful updates made to old threads.<br />
<br />
In summary:<br />
<ul>
<li>Old thread</li>
<ul>
<li>useful post</li>
<ul>
<li>upvote</li>
<li>clap </li>
<li>thank</li>
<li>like</li>
<li>etc.</li>
</ul>
<li>useless comment/stupid question</li>
<ul>
<li>downvote</li>
<li>remove post </li>
<li>send angry face</li>
<li>ban the user</li>
<li>remove account</li>
<li>report to police </li>
<li>etc.</li>
</ul>
</ul>
<li>Recent thread</li>
<ul>
<li>useful post</li>
<ul>
<li>upvote</li>
<li>clap </li>
<li>thank</li>
<li>like</li>
<li>etc.</li>
</ul>
<li>useless comment/stupid question</li>
<ul>
<li>downvote</li>
<li>remove post </li>
<li>send angry face</li>
<li>ban the user</li>
<li>remove account</li>
<li>report to police </li>
<li>etc.</li>
</ul>
</ul>
</ul>
Happy necromancing.<br />
<br />
[*] I'm not immune to asking stupid questions. I'm exaggerating somewhat; the point is that one shouldn't attack every post in an old thread regardless of its quality.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com1tag:blogger.com,1999:blog-5951025555891494293.post-9742060434986913052017-12-20T11:04:00.002+02:002018-01-10T14:38:22.186+02:00Debugging input devicesHaving trouble with input devices like mice, touchpads and keyboards, or even cameras, is hard to debug. Usually one is not sure whether the device is misbehaving or whether the desktop environment or the application is mishandling the events from the input device.<br />
<br />
First check if the driver used for your device is what you expect. For example, I had my X11 libinput driver removed by <code style="background-color: #d9ead3;">`dnf autoremove`</code> and had my touchpad taken over by <code style="background-color: #d9ead3;">`evdev`</code>, thus not working.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ xinput list-props "SynPS/2 Synaptics TouchPad"
Device 'SynPS/2 Synaptics TouchPad':
Device Enabled (140): 1
Coordinate Transformation Matrix (142): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (275): 0
Device Accel Constant Deceleration (276): 1.000000
Device Accel Adaptive Deceleration (277): 1.000000
Device Accel Velocity Scaling (278): 10.000000
Device Product ID (262): 2, 7
Device Node (263): "/dev/input/event4"
Evdev Axis Inversion (279): 0, 0
Evdev Axis Calibration (280): <no items>
Evdev Axes Swap (281): 0
Axis Labels (282): "Abs MT Position X" (302), "Abs MT Position Y" (303), "Abs MT Pressure" (304), "Abs Tool Width" (301), "None" (0), "None" (0), "None" (0)
Button Labels (283): "Button Left" (143), "Button Unknown" (265), "Button Unknown" (265), "Button Wheel Up" (146), "Button Wheel Down" (147)
Evdev Scrolling Distance (284): 0, 0, 0
Evdev Middle Button Emulation (285): 0
Evdev Middle Button Timeout (286): 50
Evdev Middle Button Button (287): 2
Evdev Third Button Emulation (288): 0
Evdev Third Button Emulation Timeout (289): 1000
Evdev Third Button Emulation Button (290): 3
Evdev Third Button Emulation Threshold (291): 20
Evdev Wheel Emulation (292): 0
Evdev Wheel Emulation Axes (293): 0, 0, 4, 5
Evdev Wheel Emulation Inertia (294): 10
Evdev Wheel Emulation Timeout (295): 200
Evdev Wheel Emulation Button (296): 4
Evdev Drag Lock Buttons (297): 0
</code></pre>
<br />
Usually you'd expect to see `libinput` (synaptics is now <a href="https://wiki.archlinux.org/index.php/Touchpad_Synaptics" target="_blank">abandoned</a>).<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>...
libinput Send Events Mode Enabled (266): 0, 0
libinput Send Events Mode Enabled Default (267): 0, 0
...
</code></pre>
<br />
Fortunately, there is a tool to help understand what the device is sending to the computer. This works for libinput devices.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo dnf install evemu
</code></pre>
<br />
Then we can see<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ ls /usr/bin/evemu-*
/usr/bin/evemu-describe /usr/bin/evemu-event /usr/bin/evemu-record
/usr/bin/evemu-device /usr/bin/evemu-play
</code></pre>
<br />
These executable files can be used to inspect, record and replay the events sent by any connected device.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ sudo evemu-record
Available devices:
/dev/input/event0: Lid Switch
/dev/input/event1: Sleep Button
/dev/input/event2: Power Button
/dev/input/event3: AT Translated Set 2 keyboard
/dev/input/event4: SynPS/2 Synaptics TouchPad
/dev/input/event5: Video Bus
/dev/input/event6: Video Bus
/dev/input/event7: TPPS/2 IBM TrackPoint
/dev/input/event8: Logitech MX Anywhere 2
/dev/input/event9: ThinkPad Extra Buttons
/dev/input/event10: HDA Intel PCH Dock Mic
/dev/input/event11: HDA Intel PCH Mic
/dev/input/event12: HDA Intel PCH Dock Headphone
/dev/input/event13: HDA Intel PCH Headphone
/dev/input/event14: HDA Intel PCH HDMI/DP,pcm=3
/dev/input/event15: HDA Intel PCH HDMI/DP,pcm=7
/dev/input/event16: HDA Intel PCH HDMI/DP,pcm=8
/dev/input/event17: HDA Intel PCH HDMI/DP,pcm=9
/dev/input/event18: HDA Intel PCH HDMI/DP,pcm=10
/dev/input/event19: Integrated Camera: Integrated C
Select the device event number [0-19]: 8
# EVEMU 1.3
# Kernel: 4.14.5-300.fc27.x86_64
# DMI: dmi:bvnLENOVO:bvrR07ET63W(2.03):bd03/15/2016:svnLENOVO:pn20FXS0BB14:pvrThinkPadT460p:rvnLENOVO:rn20FXS0BB14:rvrNotDefined:cvnLENOVO:ct10:cvrNone:
# Input device name: "Logitech MX Anywhere 2"
# Input device ID: bus 0x03 vendor 0x46d product 0x4063 version 0x111
# Supported events:
# Event type 0 (EV_SYN)
# Event code 0 (SYN_REPORT)
# Event code 1 (SYN_CONFIG)
...
B: 15 00 00 00 00 00 00 00 00
A: 20 1 652 0 0 0
################################
# Waiting for events #
################################
E: 0.000001 0002 0000 0001 # EV_REL / REL_X 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.013561 0002 0000 0001 # EV_REL / REL_X 1
E: 0.013561 0002 0001 0001 # EV_REL / REL_Y 1
E: 0.013561 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +13ms
E: 0.039808 0002 0000 0001 # EV_REL / REL_X 1
E: 0.039808 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +26ms
E: 0.063578 0002 0000 0001 # EV_REL / REL_X 1
E: 0.063578 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +24ms
E: 0.071790 0002 0000 0001 # EV_REL / REL_X 1
E: 0.071790 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +8ms
E: 0.087586 0002 0000 0001 # EV_REL / REL_X 1
E: 0.087586 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +16ms
E: 0.111578 0002 0001 0001 # EV_REL / REL_Y 1
E: 0.111578 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +24ms
...
</code></pre>
<br />
Decoding those is left for another post or as an exercise for the reader. At the very least, one can prepare logs while things are misbehaving and then report bugs to the affected projects with the logs attached. Make sure to read <code style="background-color: #d9ead3;">`man evemu-record`</code> to check for common issues preventing event capturing.<br />
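That said, a saved capture can already be sliced with ordinary text tools. The sample below uses a few hypothetical lines in the `evemu-record` format shown above (event type 0002 is EV_REL, relative motion):

```shell
# Write a small hypothetical evemu-record capture for offline analysis
cat > /tmp/evemu-sample.txt <<'EOF'
E: 0.000001 0002 0000 0001 # EV_REL / REL_X 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
E: 0.013561 0002 0000 0001 # EV_REL / REL_X 1
E: 0.013561 0002 0001 0001 # EV_REL / REL_Y 1
E: 0.013561 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
EOF
# Count relative-motion events per axis code (field 4 is the event code)
grep '^E: [0-9.]* 0002' /tmp/evemu-sample.txt | awk '{print $4}' | sort | uniq -c
```

The same pattern works on a real capture saved with `evemu-record > file`, for example to spot missing SYN_REPORT events or suspicious timing gaps.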
<br />
-- thanks to <a href="https://github.com/whot">Peter Hutterer</a> for pointing me at this toolAleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com1tag:blogger.com,1999:blog-5951025555891494293.post-60048121694253123832016-09-07T22:28:00.005+03:002016-09-07T22:57:21.850+03:00Jenkins vs OpenShift Quick ReferenceI found it hard to find an overview of Jenkins integration with OpenShift, so I decided to do a short write-up of what I know and what I've found.<br />
<br />
OpenShift v3 <br />
<ul>
<li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin" target="_blank">Launching Jenkins slaves off external OpenShift v3 cluster</a> </li>
<li><a href="https://github.com/openshift/jenkins-plugin" target="_blank">Pipelines and workflows plugin to communicate with OpenShift v3</a></li>
<ul>
<li>also <a href="https://wiki.jenkins-ci.org/display/JENKINS/OpenShift+Pipeline+Plugin" target="_blank">here</a></li>
</ul>
<li><a href="https://docs.openshift.org/latest/using_images/other_images/jenkins.html" target="_blank">Running Jenkins 1.x inside OpenShift v3</a></li>
<ul>
<li><a href="https://github.com/openshift/jenkins" target="_blank">upstream project doc</a></li>
<li>blog <a href="https://blog.openshift.com/openshift-3-2-jenkins-s2i-slave-pods/" target="_blank">Jenkins S2I and slave pods</a> </li>
</ul>
<li><a href="https://blog.openshift.com/pipelines-with-jenkins-2-on-openshift/" target="_blank">Blog: Pipelines and running Jenkins 2 inside OpenShift v3</a> </li>
</ul>
OpenShift v2 (outdated)<br />
<ul>
<li><a href="https://developers.openshift.com/managing-your-applications/continuous-integration.html" target="_blank">OpenShift v2 with running jenkins inside</a></li>
<li><a href="https://blog.openshift.com/deploy-openshift-external-jenkins-instance/" target="_blank">Deploy to OpenShift v2 from an External Jenkins Instance</a></li>
<li>No plugin to run jenkins slaves inside OpenShift v2</li>
</ul>
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com1tag:blogger.com,1999:blog-5951025555891494293.post-37518908661544419802016-06-01T00:48:00.000+03:002020-04-01T20:08:15.544+03:00Creating docker images suitable for OpenShift (ssh-git image HowTo)<h1>
Intro </h1>
This is not going to be a detailed guide to creating docker images. I'll present an example ssh-git image and highlight the more important concerns for running such an image on OpenShift v3. Things are basically in the documentation, but I hope to get you started quickly. (update: wow, I thought it was gonna be a few lines but it turned into a monster)<br />
<br />
tl;dr; skip to the OpenShift section<br />
<h1>
Plain Docker image</h1>
Starting with little docker experience and no knowledge of OpenShift requirements, I just went ahead to create a standard SSH server image; thanks to a nice git feature, one can just create a local `bare` repo to be served over SSH (to whoever has a matching key in ~/.ssh/authorized_keys).<br />
<br />
I looked around but found only a few Ubuntu examples. My favorite distro is Fedora (I'm affiliated, but still), so I thought it was a shame and went ahead to create a Fedora-based Dockerfile. In fact it was pretty much pain-free. Here's my initial version, running OpenSSH as root:<br />
<br />
<a href="https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-root/Dockerfile">https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-root/Dockerfile</a><br />
<br />
The interesting points are:<br />
<ul>
<li>`FROM fedora:latest`</li>
<li>`RUN ...` - pretty much standard commands to install ssh and configure a user; I usually also do `restorecon -R ~/.ssh`, but inside docker SELinux is not in effect, so that's skipped.</li>
<li>`EXPOSE 22` - so that docker knows which ports are needed</li>
<li><span class="pl-k">`CMD</span> ssh-keygen -A && exec /usr/sbin/sshd -D` - the interesting part here is generating host keys, as OpenSSH can't work properly without them</li>
</ul>
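Put together, those points yield roughly the following Dockerfile. This is a condensed sketch, not the exact file from the repo linked above — details like the repo path and package set are illustrative:

```dockerfile
FROM fedora:latest

# Install sshd and git, create the git user and a sample bare repo
RUN dnf -y install openssh-server git && dnf clean all && \
    useradd -m git && \
    mkdir -p /home/git/.ssh /repos && \
    chmod 700 /home/git/.ssh && \
    git init --bare /repos/sample.git && \
    chown -R git:git /home/git /repos

EXPOSE 22

# Host keys must exist before sshd starts, so generate them at launch
CMD ssh-keygen -A && exec /usr/sbin/sshd -D
```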
<h1>
Building, running, tagging, pushing </h1>
<h3>
Building </h3>
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker build -t docker.io/myaccount/imagename:latest PATH</pre>
<br />
Where latest can also be another version.<br />
<h3>
Tagging</h3>
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker tag docker.io/myaccount/imagename:latest docker.io/myaccount/imagename:1.0</pre>
<br />
As the image argument you can use a tag or an image hash; it doesn't matter.<br />
<h3>
Running</h3>
Launch the container with ports exposed, giving the container a name.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker run -d -P --name ssh-git-server myaccount/imagename:latest</pre>
btw you can try the built image from `aosqe/ssh-git-server:root-20150525`<br />
<br />
Get the exposed port number so you can use it later.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker port ssh-git-server 22</pre>
<br />
Put your ssh public key in.<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker exec ssh-git-server bash -c 'echo "ssh-rsa ..." > /home/git/.ssh/authorized_keys'</pre>
<br />
Clone the sample repo:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ git clone ssh://git@localhost:32769/repos/sample.git</pre>
<h3>
Terminating</h3>
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker rm ssh-git-server
# docker rmi <image tag> # to get rid of the image locally
</pre>
<h3>
Sharing with others (pushing)</h3>
Then you can push these images to dockerhub:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker login docker.io
# docker push docker.io/myaccount/imagename:latest
</pre>
<h1>
Image where SSHd runs as a regular user</h1>
I knew OpenShift doesn't let you run images as root, so the next step was to create an image where OpenSSH runs as the `git` user. In fact it does allow you to, but you have to grant your user extra privileges, and there is really no good reason to do that for an ssh-git server. <strike>Also a future OpenShift Online service would not allow such extra privileges for security reasons.</strike> At some point secure root pods are likely to be allowed using <a href="http://rhelblog.redhat.com/2015/07/07/whats-next-for-containers-user-namespaces/" target="_blank">user namespaces</a>, with some performance penalty.<br />
<br />
<br />
That was even less painful thanks to an old <a href="https://cygwin.com/ml/cygwin/2008-04/msg00363.html" target="_blank">post on the cygwin list</a>. Basically, <a href="https://lists.mindrot.org/pipermail/openssh-unix-dev/2017-August/036168.html" target="_blank"><strike>privilege separation needs to be turned off as it can only work as root</strike></a>, and some locations in sshd_config need adjusting using `sed`. Finally, some little `chown/chmod` adjustments. And before I forget, the port cannot be 22, so I selected 2022.<br />
<br />
So the new things are more `RUN` commands and the `USER git` directive, so the final CMD runs as the user instead of root. Here's the result:<br />
<br />
<a href="https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-user/Dockerfile">https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-user/Dockerfile</a><br />
<br />
You can try:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker run -d -P --name ssh-git-server aosqe/ssh-git-server:git-20150525</pre>
<br />
But testing this on OpenShift I've got the strange error message:<br />
<br />
<code>No user exists for uid 1000530000</code><br />
<br />
I was stuck here for a little while until I figured out that the error is produced not by OpenShift but by the SSH server itself.<br />
<h1>
OpenShift ready Image</h1>
What I found out (see the <a href="https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines" target="_blank">official guidelines</a> in the References section) is that, regardless of the `USER` directive in your Dockerfile, unless you grant extra privileges to the user or service account launching the pod, the pod will run as some random UID. The group will be static though - root. <br />
<br />
Because that random UID will not be present in the passwd file, some programs will fail to start with an error message like the one I saw above. Another issue is that pre-setup of SSH becomes impossible, as some files need to have permissions 700 for ssh to accept them. Obviously, as a random UID we cannot repair that once the pod starts. <br />
<br />
Here's how I approached:<br />
<ol>
<li>move most setup to the container start CMD </li>
<li>make a couple directories writable to the root group so that step #1 can create necessary new files (this time with proper owner and permissions)</li>
<li>make `passwd` root group writable so that we can fix our UID (<a href="https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines" target="_blank">official guideline</a> suggests using nss wrapper but I thought it's easier to just fix in-place)</li>
</ol>
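Step 3 can be sketched as follows. The snippet simulates the launch-time fix against a throwaway copy of the passwd file; in the actual image the entrypoint appends to `/etc/passwd` itself, which the Dockerfile makes group-writable at build time.

```shell
# Simulate fixing up passwd for whatever random UID OpenShift assigned us;
# /tmp/passwd stands in for the group-writable /etc/passwd in the image.
printf 'root:x:0:0:root:/root:/bin/bash\n' > /tmp/passwd
uid=$(id -u)
if ! grep -q '^git:' /tmp/passwd; then
  echo "git:x:${uid}:0:git user:/home/git:/bin/bash" >> /tmp/passwd
fi
grep '^git:' /tmp/passwd
```

After this, tools like sshd that look the current UID up in passwd find a valid entry with the right home directory.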
End result is otherwise basically the same thing, just moving around the commands:<br />
<br />
<a href="https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-openshift/Dockerfile">https://github.com/openshift-qe/ssh-git-docker/blob/master/ssh-git-openshift/Dockerfile</a><br />
<br />
btw I ought to change the multi-line CMD to a shell script and add it to the image; it would be easier to customize.
<h1>
Doing it the OpenShift way</h1>
Since I've got an <a href="https://www.openshift.com/enterprise/" target="_blank">OpenShift Enterprise</a> environment running I thought to use it directly instead of using plain docker commands (which would also work fine):<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc new-build --name=git-server --context-dir=ssh-git-openshift https://github.com/openshift-qe/ssh-git-docker</pre>
<br />
FYI you can append `#branch` if you want to build off a non-default branch. The good thing about this approach is that the image can be rebuilt automatically when the base image (fedora:latest) changes and when your code changes. You may need to configure hooks though. See the <a href="https://docs.openshift.org/latest/dev_guide/builds.html#build-triggers" target="_blank">triggers doc</a>.<br />
<br />
To monitor build:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc logs -f bc/git-server</pre>
<br />
In the log, you will see something like (can be useful later):<br />
<blockquote class="tr_bq">
The push refers to a repository [172.30.2.69:5000/a9vk7/git-server]</blockquote>
Now run the image by:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc new-app git-server:latest --name=git-server</pre>
<br />
You would end up with a deployment config called git-server that creates a replication controller `git-server-1` that keeps one pod called `git-server-...` running from the `git-server` image stream created by the `new-build` command. Also a service called `git-server` is created that will provide you with a stable IP to access the pod + its name can be used as a hostname of the git server in any pod or build that happens within the same project.<br />
<br />
One last detail is to make service listen on port 22 for nicer git URLs:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc edit svc git-server # change `port` to 22 from 2022</pre>
<br />
Note that services can only be accessed from pods running in the same project or in the 'default' project. To access a service from the Internet, you need to create a nodePort service; because this is not HTTP based, we can't use regular routes. I hope to get to that later.<br />
<br />
To see your pod name and use it, you can do:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc get pod # see pod name</pre>
<br />
Then:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc rsh git-server-... # configure ssh keys there, create repos, etc.</pre>
<br />
Now once you have your public key in the pod, you can access this server from other pods. You can try it out from the server pod itself, provided you have the matching private key. While in `rsh`, do:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ git clone git@git-server:sample.git</pre>
<br />
To push your image to dockerhub, see how to <a href="https://docs.openshift.org/latest/dev_guide/builds.html#using-docker-credentials-for-pushing-and-pulling-images" target="_blank">set build config output</a>. Or you can manually ssh to the OpenShift node and do as root:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"># docker tag 172.30.2.69:5000/a9vk7/git-server docker.io/myaccount/imagename:latest
# docker login docker.io
# docker push docker.io/myaccount/imagename:latest
</pre>
If you want to run your image off dockerhub, you can do:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;">$ oc run git-server --image=aosqe/ssh-git-server-openshift
$ oc expose dc git-server --port=22 --target-port=2022
$ oc set probe dc/git-server --readiness --open-tcp=2022
</pre>
<br />
Setting the probe lets your replication controller notice when the pod is dead and spawn a new one.<br />
<br />
<h3>Some words about persistent volumes</h3>
The way images I refer to above are built would cause any changes in public keys and repo data to be lost upon pod restart. To avoid that <a href="https://docs.openshift.org/latest/dev_guide/persistent_volumes.html" target="_blank">persistent volumes</a> need to be used.<br />
Persistent volumes at attach time will be chowned to the current UID of the pod. Provided the OpenShift-ready image does its setup at launch time, that should be easy to support, i.e. mount the volume at /home/git/
<br />
But a few changes will still need to be done:<br />
<ul>
<li>creation of sample git repo needs to be conditional, when it doesn't exist</li>
<li>`sshd_config` and `ssh-keygen` should place host keys somewhere in the `git` user's home dir so they are kept between pod restarts</li>
</ul>
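As a sketch of how those changes could look, assuming the startup script runs as the `git` user and the persistent volume is mounted at its home directory (the script and paths are hypothetical, not from the repo):

```shell
#!/bin/sh
# create the sample repo only if the persistent volume doesn't contain it yet
REPO="${HOME}/sample.git"
if [ ! -d "$REPO" ]; then
    git init --bare "$REPO"
fi
# host keys would be handled the same way: generate them under "$HOME" only
# when missing, so they survive pod restarts
```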
<h1>
Future work </h1>
<ul>
<li>make the OpenShift ready image runnable off a persistent volume </li>
<li>add info about making repo accessible from the Internet</li>
<li>convert multi-line CMD to a startup script</li>
</ul>
This post is based on the initial commit of the Dockerfiles in the repo. <br />
<h1>
References</h1>
<ul>
<li>Dockerfile git repo - <a href="https://github.com/openshift-qe/ssh-git-docker">https://github.com/openshift-qe/ssh-git-docker</a></li>
<li>image for OpenShift - <a href="https://hub.docker.com/r/aosqe/ssh-git-server-openshift/">https://hub.docker.com/r/aosqe/ssh-git-server-openshift/</a> </li>
<li>plain docker images - <a href="https://hub.docker.com/r/aosqe/ssh-git-server">https://hub.docker.com/r/aosqe/ssh-git-server</a></li>
<li>OpenShift Image guidelines - <a href="https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines">https://docs.openshift.org/latest/creating_images/guidelines.html</a></li>
</ul>
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com5tag:blogger.com,1999:blog-5951025555891494293.post-27493556559216757012016-05-13T18:20:00.001+03:002016-10-01T10:48:54.762+03:00quick debugging KVM VM issuesSee a hang, an infinite loop, or a performance issue with a VM on KVM? Here's how to get a trace of it so a Bugzilla report can be meaningful:<br />
<br />
First attach the VM configuration XML. That is obtained by:<br />
<br />
> sudo virsh dumpxml [vm_name] > some_file<br />
<br />
Cole Robinson wrote on 09/23/2014 04:24 PM:<br />
> sudo debuginfo-install qemu-system-x86<br />
> <br />
> Then on the next hang, grab the pid of the busted VM from ps axwww, and do:<br />
> <br />
> sudo pstack $pid<br />
> <br />
> Then dump that output in a bug report, along with<br />
> /var/log/libvirt/qemu/$vmname.log. File it against qemu<br />
<br />
The system logs from host and guest might also be interesting. On Fedora you can obtain them with a command similar to:<br />
<br />
> sudo journalctl --system --since today Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-57614240573439258632016-05-10T16:31:00.001+03:002018-01-10T14:45:41.562+02:00replicating HTTP Server replies using ncat and socatI was looking at an issue where the rest-client ruby gem raised an error on `#cookies_jar` on one particular server, while it worked fine on a couple of public servers I tried [1].<br />
<br />
I was just going to write a simple script to serve as an HTTP server and return the same response as the offending server, but hey, I thought, there must be an easier way.<br />
<br />
So I just obtained the raw response from the original server, put it into a file and asked netcat to listen and give it back on request.<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>$ cat > response.raw << "EOF"
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 36
Content-Type: text/html; charset=utf-8
Last-Modified: Mon, 11 Apr 2016 05:39:53 GMT
Server: Caddy
Date: Tue, 10 May 2016 08:10:17 GMT
Set-Cookie: OPENSHIFT_x7xn3_service-unsecure_SERVERID=c72192d7fe9c33d8dec083448dd4f40f; path=/; HttpOnly
Cache-control: private

Hello-OpenShift-Path-Test http-8080
EOF
$ nc -l 8080 < response.raw</code></pre>
## on another console<br />
<pre style="background-color: #d9ead3;"><code>$ curl -v localhost:8080 </code></pre>
<br />
That's the simplest I could get. It will return the same thing regardless of path and query string you put in your client URL. e.g. this will work the same:<br />
<br />
<pre style="background-color: #d9ead3;"><code>$ curl -v 'localhost:8080/path?asd=5'</code></pre>
<br />
Now if you want your server to return something multiple times, then you can try<br />
<br />
<pre style="background-color: #d9ead3;"><code>$ nc -kl 8080 -c 'cat response.raw'</code></pre>
<br />
Another option if your system lacks netcat is the `socat` utility.<br />
<br />
<pre style="background-color: #d9ead3;"><code>$ socat TCP-LISTEN:8080,fork EXEC:"cat response.raw"</code> </pre>
<br />
If you remove `fork` from the options, it will exit after the first connection is served. But we can also listen over HTTPS:<br />
<br />
<pre style="background-color: #d9ead3;"><code>$ socat OPENSSL-LISTEN:8080,cert=/path/cert.pem,verify=0 EXEC:"cat response.raw"</code></pre>
<br />
Again, add the `fork` option to keep listening. The above will ignore the client certificate. In fact you can create a proper client cert and configure SSL verification, but that's beyond today's topic. FYI, use `socat` version 1.7.3.1+, otherwise you'd be hit by a weak DH key [2]. As a workaround you could generate a DH key in a file and provide it to socat with the `dhparams` option.<br />
<br />
[1] https://github.com/rest-client/rest-client/issues/487<br />
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1021946Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-31896120081393007532016-04-15T15:52:00.002+03:002016-04-15T15:59:25.737+03:00ruby calling methods without parentheses can be misleadingThis one stayed in my drafts for maybe a year. I thought to add a short explanation and publish it. <br />
<blockquote class="tr_bq">
[1] pry(main)> class Test<br />
[1] pry(main)* def gah<br />
[1] pry(main)* puts "gah"<br />
[1] pry(main)* end <br />
[1] pry(main)* def fah<br />
[1] pry(main)* gah<br />
[1] pry(main)* puts gah<br />
[1] pry(main)* gah="now local variable"<br />
[1] pry(main)* puts gah<br />
[1] pry(main)* gah()<br />
[1] pry(main)* end <br />
[1] pry(main)* end <br />
<br /></blockquote>
<blockquote>
[3] pry(main)> Test.new.fah<br />
gah<br />
gah<br />
<br />
now local variable<br />
gah</blockquote>
Basically, if for some reason a variable is defined in the current context with the same name as a method, then referring to that name later may resolve to the variable instead of the method.<br />
<br />
Calling `gah` in the beginning results in calling the instance method `#gah`. But after we do `gah="now local variable"`, calling `gah` yields the local variable's value. Finally, calling `gah()` always results in calling the instance method.<br />
<br />
A simple thing, but it can be confusing. The solution would be to always call methods with parentheses, or to keep methods short so any such mistakes are easy to spot.<br />
<br />
My initial reaction when I first learned you can call methods without parentheses in Ruby was that it is a bad idea. Then I became lazy and stopped using them. How can we be so lazy? Only 2 characters? Oddly enough I don't feel like starting to write parentheses again.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-8919094302115984572016-02-15T22:33:00.001+02:002016-02-15T23:38:06.099+02:00rsync to/from OpenShift v3 pods<b>Update:</b> Some pointed out the `<a href="https://docs.openshift.org/latest/dev_guide/copy_files_to_container.html" target="_blank">oc rsync</a>` command already exists. Shame on me I missed that. My only solace is that it does not support all rsync options (yet). Read below only if the standard command does not work for you (or if you're curious how it works). <br />
<br />
I was thinking about easy copying of files to/from <a href="http://openshift.com/" target="_blank">OpenShift</a> pods and thought it would be awesome if I could make `rsync` use `oc exec` instead of ssh to perform that. This should not be a common use case as pods should generally be stateless, but one may want to backup data from a persistent volume, for example when the environment is not under the pod owner's control.<br />
<br />
You may already know that ssh access is not available to OpenShift/Kubernetes managed pods, but at least in OpenShift one can use the client tool to access them in an ssh-like fashion. That's done using the `oc rsh` and `oc exec` sub-commands.<br />
<br />
In fact `oc rsh` only wraps `oc exec`, adding the `-i` option to pass stdin to the remote process, as well as `-t` for a terminal. For rsync we only need `-i` though. Here's the magic incantation:<br />
<br />
<pre style="background-color: #d9ead3;">$ rsync -av -e 'oc exec -n fs1d4 -i myapp-1-vytqm' -- /tmp/ec2-user/ --:haha/
sending incremental file list
created directory haha
./
gagag
brewer/
sent 142 bytes received 38 bytes 72.00 bytes/sec
total size is 4 speedup is 0.02
</pre>
<br />
Let me dissect that for you. First we use the `-a` option of rsync because I want a recursive sync keeping all file properties, as well as `-v` to see what actually happened. You can add and mix any other `rsync` options here like `--delete`, `--exclude`, etc.<br />
<br />
Then we specify the rsh command and that is 'oc exec -n fs1d4 -i myapp-1-vytqm' where `fs1d4` is your project name and `myapp-1-vytqm` is the desired pod name. You can use any other `oc` option here like `--config`, `--container`, etc. The important points are:<br />
<br />
<ul>
<li>keep the `-i` option so that rsync can talk to remote process</li>
<li>do not include the `--` option terminator so that rsync can later add it</li>
</ul>
<br />
After that we use the `--` option terminator to tell rsync to treat any further command parameters as path location specifiers and not options. This is important because our hack forces us to use `--` as a hostname in the SRC or DST location specifier. More on that later.<br />
<br />
Our SRC location specifier is `/tmp/ec2-user/` and that is a local test directory. The DST location specifier is `--:haha/` which means relative path `haha/` on host `--`. The reason is that if we only specify "haha/", then rsync will consider that location local and will not invoke the remote shell. So we need to specify some hostname. But whatever we specify, it will break our `oc exec` command. So I figured I could specify `--` for the hostname and rsync would just append it to the remote shell command. And in fact we need `--` appended, otherwise the remote call to `rsync` will fail.<br />
<br />
You may already know that the local `rsync` calls `rsync` on the remote end and then the two processes communicate over stdin/stdout. To avoid `oc exec` interpreting options of the remote `rsync` invocation, we need that `--` as a hostname. And to avoid `--:haha/` being interpreted as an option to the local `rsync` invocation, we need the first `--` above.<br />
<br />
I doubt my ability to explain this clearly, but hopefully the above gives you some pointers on how to make it work. Here are a few other important points:<br />
<br />
<ul>
<li>make sure your pod has `rsync` already installed and available on the pod's PATH, or use the `--rsync-path` option </li>
<li>make sure you have write access to destination dir</li>
<li>make sure you're already logged into OpenShift (oc login)</li>
<li>make sure to write correct commands as mistakes usually produce hardly informative output (`strace -f` helps to debug)</li>
</ul>
<br />
Hope that helps somebody.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-66988849932861028312015-11-26T21:00:00.000+02:002015-11-26T21:22:23.110+02:00redsocks on Fedora/Red Hat Enterprise Linux/CentOSThere is an amazing piece of software: <a href="http://darkk.net.ru/redsocks/" target="_blank">redsocks</a>. What it basically does is listen for TCP (it also has some UDP and DNS support), and if anything comes its way, it redirects it through a SOCKS4/5 server or HTTP Connect/Relay proxy.<br />
<br />
On Linux all one needs is to make iptables DNAT/REDIRECT desired connections to the IP/port the redsocks daemon is listening on.<br />
<br />
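For illustration, assuming redsocks is configured to listen on 127.0.0.1:12345 (the port, chain name, and matched destination ports below are all up to you), the redirection rules could look roughly like this. This needs root and is a sketch, not a drop-in configuration:

```shell
# dedicated nat chain for redsocks
iptables -t nat -N REDSOCKS
# leave loopback destinations alone
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
# everything else entering the chain goes to the local redsocks listener
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
# send outgoing web traffic through the chain
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDSOCKS
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDSOCKS
```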
My only issue was that the redsocks package is not available for Fedora/Red Hat Enterprise Linux. I figured Debian has a package with systemd integration, so I thought to leverage that work. But Debian GNU/Linux uses the .deb package format, while Fedora uses .rpm.<br />
<br />
Fedora comes with the `alien` package that can convert between deb and rpm. So I downloaded the deb from <a href="https://packages.debian.org/search?keywords=redsocks" target="_blank">Debian testing</a> and gave it a try:<br />
<br />
$ sudo alien -r redsocks_0.4+dfsg-2_amd64.deb<br />
Warning: Skipping conversion of scripts in package redsocks: postinst postrm prerm<br />
Warning: Use the --scripts parameter to include the scripts.<br />
redsocks-0.4+dfsg-3.x86_64.rpm generated<br />
$ ls<br />
redsocks_0.4+dfsg-2_amd64.deb redsocks-0.4+dfsg-3.x86_64.rpm<br />
$ yum install redsocks-0.4+dfsg-3.x86_64.rpm<br />
<...><br />
Transaction check error:<br />
file / from install of redsocks-0.4+dfsg-3.x86_64 conflicts with file from package filesystem-3.2-20.el7.x86_64<br />
file /lib from install of redsocks-0.4+dfsg-3.x86_64 conflicts with file from package filesystem-3.2-20.el7.x86_64<br />
file /usr/sbin from install of redsocks-0.4+dfsg-3.x86_64 conflicts with file from package filesystem-3.2-20.el7.x86_64<br />
file /etc/init.d from install of redsocks-0.4+dfsg-3.x86_64 conflicts with file from package chkconfig-1.3.61-5.el7.x86_64<br />
<...> <br />
<br />
Dang! `alien` got the libevent dependency just right but wanted to take ownership of a couple of dirs that generally should not be touched. After some searching I found <a href="https://ask.fedoraproject.org/en/question/37185/file-conflict-for-installing-a-package-with-filesystem/" target="_blank">the solution</a> - install `rpmrebuild` to remove the conflicting directory entries. Don't worry, actual files are not removed, only the directory entries inside the RPM content.<br />
<br />
$ rpmrebuild -pe redsocks-0.4+dfsg-3.x86_64.rpm <br />
Do you want to continue ? (y/N) y<br />
result: /home/user/rpmbuild/RPMS/x86_64/redsocks-0.4+dfsg-3.x86_64.rpm<br />
<br />
Invoked this way, `rpmrebuild` will present you with a text editor. Find the '%files' section and remove all directory entries not specific to redsocks (i.e. "/", "/lib", ...).<br />
<br />
Finally create the user the daemon will run as (this is part of the Debian post-install script):<br />
<br />
$ sudo adduser --system --home /var/run/redsocks --no-create-home -U redsocks<br />
<br />
And that's all. Now do `yum install /home/user/rpmbuild/RPMS/x86_64/redsocks-0.4+dfsg-3.x86_64.rpm` and that should be it. Configuration can be found in /etc/redsocks.conf; then `systemctl enable redsocks; systemctl start redsocks`.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-15476797071239048552015-09-14T15:00:00.000+03:002015-09-14T15:00:01.613+03:00Copy files to running OpenShift 3 podsNeeded to copy a newer mysql client for debugging purposes into an <a href="https://www.openshift.com/">OpenShift v3</a> pod. (Fortunately the mysql client binary is self-contained and does not need special shared libs; this worked with v5.6.26.)<br />
<br />
Here's how I managed to do so:<br />
<br />
<span style="color: #6fa8dc;">oc exec -i [pod name] -n [project] -- bash -c 'cat > [remote file]' < [local file]</span><br />
<br />
e.g.<br />
<span style="color: #6fa8dc;">oc exec -i mysql-55-centos7-1-f42js -n 0wwwv -- bash -c 'cat > /tmp/mysql' < /tmp/mysql</span><br />
<br />
What does it do?<br />
<br />
It invokes the `exec` command with the `-i` option so that stdin is passed through (otherwise stdin is ignored). The remote command cats everything into the desired path. You need to `chmod 755 [file]` inside the pod if that file is to be executed.<br />
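The remote side of this trick is nothing but `cat` writing its stdin to a file, which you can verify locally without any cluster (temporary files are used just for the demo):

```shell
# emulate the copy locally: stdin flows through `bash -c 'cat > ...'`
# exactly like it flows through `oc exec -i ... -- bash -c 'cat > ...'`
src="$(mktemp)"; dst="$(mktemp)"
echo "some payload" > "$src"
bash -c "cat > '$dst'" < "$src"
cmp -s "$src" "$dst" && echo "copied intact"
```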
<br />
To get into the pod in a shell try `oc rsh`.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-53270284449334048072015-07-21T22:23:00.001+03:002015-07-21T22:29:20.014+03:00HTTP Proxy vs overriding hostnameThere's a known trick with `curl` to connect to a server via its IP address and override the virtual host you connect to using the `Host:` header, or using the `--resolve` option. This is mainly useful to test server instances that don't have proper DNS records set.<br />
<br />
<blockquote class="tr_bq">
curl -H "Host: abc.example.com" 8.7.6.5<br />
curl --resolve abc.example.com:80:8.7.6.5 http://abc.example.com</blockquote>
<br />
This is quite handy. And in unfortunate circumstances one can need that behind a proxy. So I was banging my head trying to understand how to do it. Eventually I resorted to reading <a href="https://www.ietf.org/rfc/rfc2068.txt">rfc2068</a>.<br />
<br />
The news is not good (as much as probably very few people care): there is no way to perform that hack over a standard HTTP proxy. Here's an excerpt:<br />
<blockquote class="tr_bq">
<pre>5.2 The Resource Identified by a Request
HTTP/1.1 origin servers SHOULD be aware that the exact resource
identified by an Internet request is determined by examining both the
Request-URI and the Host header field.
An origin server that does not allow resources to differ by the
requested host MAY ignore the Host header field value. (But see
section 19.5.1 for other requirements on Host support in HTTP/1.1.)
An origin server that does differentiate resources based on the host
requested (sometimes referred to as virtual hosts or vanity
hostnames) MUST use the following rules for determining the requested
resource on an HTTP/1.1 request:
1. If Request-URI is an absoluteURI, the host is part of the
Request-URI. Any Host header field value in the request MUST be
ignored.
2. If the Request-URI is not an absoluteURI, and the request
includes a Host header field, the host is determined by the Host
header field value.
3. If the host as determined by rule 1 or 2 is not a valid host on
the server, the response MUST be a 400 (Bad Request) error
message.
Recipients of an HTTP/1.0 request that lacks a Host header field MAY
attempt to use heuristics (e.g., examination of the URI path for
something unique to a particular host) in order to determine what
exact resource is being requested.
</pre>
</blockquote>
There are two ways to ask a HTTP proxy to perform a request:<br />
<blockquote class="tr_bq">
GET http://abc.example.com/ HTTP/1.0</blockquote>
or<br />
<blockquote class="tr_bq">
GET / HTTP/1.0<br />
Host: abc.example.com</blockquote>
In the first form (point #1 above), even if a `Host` header is provided, the proxy server must ignore it and send the target web server whatever host is provided in the absolute URL. That's why it's technically impossible to do this over an HTTP proxy.<br />
<br />
This doesn't mean you can't do it with a SOCKS server, or if the proxy allows CONNECT to the desired destination port.<br />
<br />
With SOCKS, `curl` should perhaps do just fine (haven't tested). Using CONNECT to port 80 though would require manual interaction with the socket, or a little programming, as standard tools do not use CONNECT for plain non-encrypted HTTP connections.<br />
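For reference, the manual interaction could look roughly like this (the proxy host and port are hypothetical; the target IP and hostname are from the earlier example, and a real session would need to wait for the proxy's `200 Connection established` reply rather than just sleep):

```shell
{ printf 'CONNECT 8.7.6.5:80 HTTP/1.1\r\nHost: 8.7.6.5:80\r\n\r\n'
  sleep 1
  printf 'GET / HTTP/1.1\r\nHost: abc.example.com\r\nConnection: close\r\n\r\n'
  sleep 2
} | nc proxy.example.com 3128
```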
<br />
Premium content:<br />
Here's how to use proxy with auth via telnet:<br />
<a href="http://unix.stackexchange.com/a/36632/14907">http://unix.stackexchange.com/a/36632/14907</a><br />
<br />
And here's a good example how to use the CONNECT method manually:<br />
<a href="http://blog.vivekjishtu.com/2013/09/using-connect-method-on-http-proxy.html">http://blog.vivekjishtu.com/2013/09/using-connect-method-on-http-proxy.html</a>Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-86836846763698111702015-07-21T12:30:00.000+03:002015-07-21T12:30:52.643+03:00How to get certificate chain by rubyLooking on the Internet I couldn't find any examples of how to get hold of a remote server's certificate chain. But it turned out to be rather simple after looking at the Ruby 2.2.2 API docs.<br />
<br />
<a href="https://gist.github.com/akostadinov/fa54e7bc5a1858af5677">https://gist.github.com/akostadinov/fa54e7bc5a1858af5677</a><br />
<br />
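Something along these lines should work (a minimal sketch based on the Ruby OpenSSL API, not the gist's exact code; example.com stands in for your server and it obviously needs network access):

```shell
ruby -ropenssl -rsocket -e '
  tcp = TCPSocket.new("example.com", 443)
  ssl = OpenSSL::SSL::SSLSocket.new(tcp)
  ssl.hostname = "example.com"  # SNI, needed for name-based virtual hosts
  ssl.connect
  ssl.peer_cert_chain.each { |cert| puts cert.subject }
  ssl.close
  tcp.close
'
```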
Thought it might be useful to somebody, so posting it in public.Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-86779016610603503692015-07-09T20:52:00.001+03:002018-01-10T14:57:34.764+02:00Apache HTTPd SSL/OCSP primerI just stumbled over some old howto I used for testing OCSP. Here I'm pasting it to hopefully help somebody understand how it works. It assumes you already have an up and running Apache HTTPd web server.<br />
<br />
<h2>
Create needed certificates</h2>
Download this useful script:<br />
<a href="https://git.fedorahosted.org/cgit/pkinit-nss.git/plain/doc/openssl/make-certs.sh">https://git.fedorahosted.org/cgit/pkinit-nss.git/plain/doc/openssl/make-certs.sh</a><br />
Use the script to generate certificates:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>./make-certs.sh europa.sfo.corp.google.com test@example.com all ocsp:http://europa.sfo.corp.google.com/
./make-certs.sh america.sfo.corp.google.com test@example.com all ocsp:http://europa.sfo.corp.google.com/</code></pre>
<br />
<h2>
Prepare and start responder using openSSL</h2>
Create an index.txt file, which contains information about the certificates the responder is handling.<br />
The file is plain text with fields separated by TABs.<br />
Here is an example of the structure:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>V 100320100000Z 593C5290F246444B unknown DC=com, DC=example/mail=test@example.com, CN=europa.sfo.corp.google.com
V B9290C71D224ACB3 unknown DC=com, DC=example, CN=Test Certifying CA
R 131021200751Z 100324142709Z,superseded 593C5290F246444C unknown DC=com, DC=example/mail=test@example.com, CN=america.sfo.corp.google.com</code></pre>
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>The columns are defined as:
#define DB_type 0 /* Status of the certificate */
#define DB_exp_date 1 /* Expiry date */
#define DB_rev_date 2 /* Revocation date */
#define DB_serial 3 /* Serial No., index - unique */
#define DB_file 4
#define DB_name 5 /* DN, index - unique when active and not disabled */
</code></pre>
<br />
<b><i>Notes:</i></b><br />
- DB_type can only be V or R; E does not work<br />
- the date format is YYMMDDHHMMSSZ<br />
- DB_serial has to be HEX with upper-case letters<br />
- use the example provided above, just don't forget to change the serial numbers<br />
<br />
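Since the TABs are hard to see in a rendered page, here is a sketch that composes one such line with `printf` (the serial and subject are taken from the example; the expiry date is made up):

```shell
# fields: status, expiry date, revocation date (empty for a valid cert),
# serial, file ("unknown"), subject DN -- all separated by single TABs
status="V"
expiry="400101000000Z"      # YYMMDDHHMMSSZ
serial="593C5290F246444B"   # upper-case hex
subject="DC=com, DC=example/mail=test@example.com, CN=europa.sfo.corp.google.com"
printf '%s\t%s\t\t%s\tunknown\t%s\n' "$status" "$expiry" "$serial" "$subject"
```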
<b><i>Start responder using:</i></b><br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>openssl ocsp -index index.txt -port 8088 -rsigner certs/ca.pem -CA certs/ca.pem -text</code></pre>
<br />
<b><i>Check if responder is working fine using:</i></b><br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>openssl ocsp -issuer certs/ca.pem -CAfile certs/ca.pem -url http://localhost:8088 -cert certs/europa.sfo.corp.google.com.pem</code></pre>
should return GOOD certificate status<br />
<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>openssl ocsp -issuer certs/ca.pem -CAfile certs/ca.pem -url http://localhost:8088 -cert certs/america.sfo.corp.google.com.pem</code></pre>
should return REVOKED ... reason: superseded<br />
<h2>
Modify $EWS_HOME/httpd/conf.d/ssl.conf:</h2>
- add this to the end of the file:<br />
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>#OCSP
SSLVerifyClient on
SSLVerifyDepth 10
SSLOCSPEnable on
SSLOCSPDefaultResponder http://localhost:8088/
SSLOCSPOverrideResponder on</code></pre>
- httpd has to ask for a client certificate, enable OCSP and set the default responder<br />
<br />
- uncomment the <code style="background-color: #d9ead3;">SSLCACertificateFile /etc/pki/tls/cert.pem</code> line in the SSL configuration file and make sure your new CA is added to the cert.pem file,<br />
or just point this to ocsp/certs/ca.crt (generated in one of the previous steps)<br />
<br />
- copy some static content to $EWS_HOME/httpd/www/html/<br />
e.g. <code style="background-color: #d9ead3;">echo "OCSPTestSucceed" > $EWS_HOME/httpd/www/html/ocsp.txt</code><br />
<br />
- start apache<br />
<br />
<h2>
Now try it out:</h2>
<h5>
run:</h5>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>wget --output-document=/dev/null --no-check-certificate --certificate=client_cert/america.sfo.corp.google.com.pem --ca-certificate=client_cert/ca.pem https://&lt;your server&gt;/ocsp.txt</code></pre>
<h5>
result:</h5>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>Resolving &lt;your server&gt;... 10.34.34.43
Connecting to &lt;your server&gt;|10.34.34.43|:443... connected.
OpenSSL: error:14094414:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate revoked
Unable to establish SSL connection.
</code></pre>
<br />
<h5>
run:</h5>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>wget --output-document=/dev/null --no-check-certificate --certificate=client_cert/europa.sfo.corp.google.com.pem --ca-certificate=client_cert/ca.pem https://&lt;your server&gt;/ocsp.txt
</code></pre>
<br />
<h5>
result:</h5>
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>HTTP request sent, awaiting response... 200 OK</code></pre>
<br />
<h2>
References</h2>
<ul>
<li><a href="http://www.openssl.org/docs/apps/ocsp.html">http://www.openssl.org/docs/apps/ocsp.html</a></li>
<li><a href="http://www.imperialviolet.org/2009/12/20/setting-up-ocsp.html">http://www.imperialviolet.org/2009/12/20/setting-up-ocsp.html</a></li>
<li><a href="http://www.mail-archive.com/openssl-users@openssl.org/msg45982.html">http://www.mail-archive.com/openssl-users@openssl.org/msg45982.html</a></li>
<li><a href="http://groups.google.com/group/mailing.openssl.users/browse_thread/thread/2579b88ea16197b5">http://groups.google.com/group/mailing.openssl.users/browse_thread/thread/2579b88ea16197b5 </a></li>
</ul>
<h2>
Attribution</h2>
Thanks to my colleague <a href="https://developer.jboss.org/people/rrajesh">Rajesh</a> for preparing initial info.<br />
Aleksandar Kostadinovhttp://www.blogger.com/profile/07448621230581407193noreply@blogger.com0tag:blogger.com,1999:blog-5951025555891494293.post-58583649505010968672015-07-08T21:45:00.001+03:002018-01-10T14:48:55.723+02:00OpenShift v3 REST API usageTime for a quick v3 api trial. The REST api is completely changed. First notable thing is authentication. Basic auth is no longer supported. Now the only supported auth types are <b>oauth token</b> and <b>client SSL certificate</b>.<br />
<br />
Let's see how we can get an OAuth token. The first method is to go with a <b>browser</b> to <b>https://your.openshift.master.server.example.com:8443/oauth/token/request</b><br />
<br />
That will present you with a nice page explaining how to use your newly acquired token. But we want to perform this in a more machine-friendly manner, so let's use <b>openshift-challenging-client</b> via curl:<br />
<blockquote class="tr_bq">
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>curl -u joe -kv -H "X-CSRF-Token: xxx" 'https://master.cluster.local:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token'</code></pre>
</blockquote>
Keys here are:<br />
<ul>
<li>use <code style="background-color: #d9ead3;">client_id=openshift-challenging-client</code>, otherwise <code style="background-color: #d9ead3;">400 Bad Request</code> is returned</li>
<li>use <code style="background-color: #d9ead3;">response_type=token</code> </li>
<li>set <code style="background-color: #d9ead3;">X-CSRF-Token</code> header to some non-empty value, otherwise error is returned</li>
<li>actual token is returned in the <code style="background-color: #d9ead3;">Location</code> header of the <code style="background-color: #d9ead3;">302</code> response per the OAuth spec as <code style="background-color: #d9ead3;">access_token=VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU</code></li>
</ul>
See partial curl output:<br />
<br />
<blockquote class="tr_bq">
<pre style="background-color: #d9ead3; overflow-x: scroll;"><code>< HTTP/1.1 302 Found
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Expires: Fri, 01 Jan 1990 00:00:00 GMT
< Location: https://master.cluster.local:8443/oauth/token/display#access_token=VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU&expires_in=86400&token_type=bearer
< Pragma: no-cache
< Set-Cookie: ssn=MTQzNjM3NzI4NXxDSkxSTl8yb0ZjUmZaSDZwNG51UjNDZEx1M29xRldQNGtGZTMwbnhfYlNRV2FuVmYxVHlKSWhWazVKWjR2RDc3X056ZVpqZXl6VWN4T0Nqc1dyX01raDhiUlNSdXFpdkhDalAwWDQzNWdyWExlTmNTUURjN3pQeW9HT1RpVmRtQ1JBPT18qi62Db0PolIHaMmAjtdKPejhCGRY-EUEruT6W_Du2bg=; Path=/; Expires=Wed, 08 Jul 2015 18:41:25 UTC; Max-Age=3600; HttpOnly; Secure
< Date: Wed, 08 Jul 2015 17:41:25 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8</code></pre>
</blockquote>
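If you want to script this, the token can be cut out of the `Location` header; a small sketch using the header value from the response above:

```shell
# extract access_token=... from the 302 Location header; the value below is
# copied from the example response
location='Location: https://master.cluster.local:8443/oauth/token/display#access_token=VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU&expires_in=86400&token_type=bearer'
token="$(printf '%s\n' "$location" | sed -n 's/.*access_token=\([^&]*\).*/\1/p')"
echo "$token"
```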
Now let's perform a real API call to remove the token we just obtained:<br />
<blockquote class="tr_bq">
<pre style="background-color: #d9ead3; overflow-x: scroll;">curl -vk -H "Authorization: Bearer VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU" https://master.cluster.local:8443/oapi/v1/oauthaccesstokens/VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU -X DELETE</pre>
</blockquote>
Well done!<br />
<br />
Notice how we pass the token, using a header: <br />
<blockquote class="tr_bq">
<pre style="background-color: #d9ead3; overflow-x: scroll;">"Authorization: Bearer VO4dAgNGLnX5MGYu_wXau8au2Rw0QAqnwq8AtrLkMfU"</pre>
</blockquote>
<br />
You might have noticed the API endpoint is /oapi/v1/something, where "v1" is the version of the API. How do we get the server's supported API versions? Two ways so far:<br />
<blockquote class="tr_bq" style="background-color: #d9ead3;">
$ curl -k https://master.cluster.local:8443/<br />
{<br />
"paths": [<br />
"/api",<br />
"/api/v1beta3",<br />
"/api/v1",<br />
"/controllers",<br />
"/healthz",<br />
"/healthz/ping",<br />
"/logs/",<br />
"/metrics",<br />
"/ready",<br />
"/osapi",<br />
"/osapi/v1beta3",<br />
"/oapi",<br />
"/oapi/v1",<br />
"/swaggerapi/"<br />
]</blockquote>
Looking specifically at kubernetes API endpoint:<br />
<blockquote class="tr_bq" style="background-color: #d9ead3;">
$ curl -vk https://master.cluster.local:8443/api<br />
{<br />
"versions": [<br />
"v1beta3",<br />
"v1"<br />
]</blockquote>
I forgot to mention: OpenShift v3 gives you access to the plain Kubernetes REST API as well as the OpenShift REST API, because OpenShift is built atop Kubernetes. Not sure if the versions of those two will stay in sync forever, but for the time being the supported Kubernetes and OpenShift API versions do match.<br />
<br />
I'm planning to go through the most interesting API calls in another post. For now you can look at the existing documentation:<br />
<ul>
<li><a href="https://docs.openshift.org/latest/rest_api/openshift_v1.html">https://docs.openshift.org/latest/rest_api/openshift_v1.html</a> - OpenShift REST API documentation</li>
<li><a href="https://docs.openshift.org/latest/rest_api/kubernetes_v1.html">https://docs.openshift.org/latest/rest_api/kubernetes_v1.html</a> - Kubernetes API documentation</li>
<li><a href="https://docs.openshift.org/latest/architecture/additional_concepts/authentication.html">https://docs.openshift.org/latest/architecture/additional_concepts/authentication.html</a> - authentication documentation</li>
<li><a href="https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/other_api_objects.html">https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/other_api_objects.html</a> - you can read here about OAuth objects</li>
</ul>
Update (June 2016): You can run <code style="background-color: #d9ead3;">`oc ... --loglevel=8`</code> to see the exact calls made by the oc utility. It's priceless for understanding how an API call works. Keep in mind that the output exposes your token.<br />
Thanks a lot to <a href="https://github.com/liggitt">@liggitt</a>, without whom I wouldn't be able to write the above!<br />
<br />
Firefox OS on Mozilla developers' phone Firefox Flame Review (Aleksandar Kostadinov, 2014-12-15)<br />
<br />
This is my subjective view on the Mozilla Firefox Flame device
running Firefox OS 2.x nightlies. I've been using it as my main phone
for maybe 3 months already, and I think that's enough time that
any critique reflects the real user experience rather than just
habits carried over from other devices.<br />
<br />
First, I have a good impression of the phone's design. It's really nice to hold and look at.<br />
<br />
Also,
I'm very impressed with the battery life. With light usage it lasts
me maybe 5 days in the city with good network coverage.<br />
<br />
It has a very sane user interface that I immediately got used to.<br />
<br />
But here are the pain points I see.<br />
<br />
Foremost
is the presence of only one button. Having used Android before, I'd
say I still often miss the back button and the menu/settings
button. Why waste the device's real estate by putting only one
button with all this empty space around it?<br />
Full-screen apps
usually put a tiny arrow at the bottom of the screen, but it is hard to
spot and hard to tap on the touchscreen. A back button is often
missing in apps, and it's hard to go back (one has to resort to the home
button and then navigate back to the desired place).<br />
If somebody from Mozilla cares, please reconsider adding the other two buttons.<br />
<br />
The
OS has a very nice feature: hold the home button and you can select
applications and close them (unlike Android, where you can't close an
app). There is a lot of space on the screen, but the X button is so tiny
that I always have trouble tapping on it (maybe I should file a bug
report for that).<br />
<br />
Another nifty feature is sliding from
one app to another. But this gesture interferes with the sliding features of the
app itself. For example, when I try to view the previous/next image in the
gallery app, I often switch apps instead. IMO it would be better to switch
apps only when sliding in a particular area of the screen, like the
top 1/3 or 1/4, because within an app one would rarely
need to slide that high.<br />
<br />
Time...
very strange, but my phone often hangs. After I reboot it, the time goes
totally wrong, and I need to disable and re-enable automatic time for it to be
resynced. Very confusing at times.<br />
<br />Contacts:<br />Not
sure why every time I try to sync contacts with Google I need to enter
the password again along with the two-factor code. Sync is also only one way. <br />
Renaming a contact is not detected, so I end up with two contacts. Finding duplicates didn't catch that either.<br />
It's also
too bad the contacts app does not show anything within parentheses, like "John
(the good boy)". I see only "John" in the list and can't differentiate
between the Johns until I open the contact.<br />
Another problem is that the
app does not search through all fields of a contact. I often have both a name
and a phonetic name for contacts. It's too bad I have to search twice,
with the native name and the phonetic name, to find a contact.<br />
One other
contacts deficiency is that it only downloads contacts from "My contacts".
I used to have other groups to select what I want on the phone,
but had to put all contacts under "My contacts" so they are synced to
the phone. <br />
<br />
Email:<br />
There is a lack of categories and no
ability to do anything with attachments whatsoever, neither with the
app nor by accessing Gmail in the browser. Actually, access through the
browser was mostly broken last time I tried.<br />
<br />
There's
some issue with charging the battery over USB from a laptop in standby. It
works with an Android phone... I guess that might be a hardware quirk. <br />
<br />
Keyboard:<br />
I'd only say it's rather inconvenient compared to Android.<br /><br />This
turned into a rant, but Firefox OS really needs a lot of polish
before it's a pleasure to work with. Still, I'm sticking with it for the
foreseeable future.