Sunday, April 7, 2024

Zenbleed microcode update on AMD Ryzen 5 3600

 tl;dr: not worth the effort, but an interesting exercise

Overview of updating microcode on Linux

  1. Figure out your CPUID and current microcode version
  2. Check for a newer version and download it
  3. Integrate it in your boot process

This example should be useful for other AMD CPUs as well. Some parts apply to Intel CPUs too, but there is generally more information about those elsewhere anyway.

Getting the CPUID

I couldn't find a nice way to determine your CPUID. Initially I was confused by the output of `dmidecode` for the "Processor" section, where "ID" read "10 0F 87 00 FF FB 8B 17". It turned out, though, that this has nothing to do with the actual CPUID needed to obtain and upload microcode.

What I did was open http://instlatx64.atw.hu/, which is linked from the MC Extractor README, and find my CPU by the model name "Ryzen 5 3600"; that showed the proper CPUID. MC Extractor is used to extract and analyze microcode from BIOS images and other sources, so it is an interesting project to explore further.

In this case the CPUID is "00870F10". After the fact, I figured out that this ID can also be obtained from `cpuid -r` by looking at the `eax` register at leaves `0x00000001` and `0x80000001`. But I can't say that this is universal. You can check more details about the CPUID instruction itself.
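For reference, this is roughly how it looks here (a sketch; the raw output layout may differ between versions of the `cpuid` tool):

```
$ cpuid -1 -r | awk '$1 == "0x00000001" { print $3 }'
eax=0x00870f10
```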

Getting current microcode version

This is much more straightforward. You can see the current microcode version in the BIOS settings, `dmidecode`, `/proc/cpuinfo`, and `cpuid` (on Windows, HWiNFO and CPU-Z showed it too). It was 0x8701030 for me (the motherboard has AGESA 1.2.0.B, while the Zenbleed fix should be in 1.2.0.C, see the AMD bulletin).
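For example, straight from `/proc/cpuinfo` (the value is repeated for every logical CPU, so one match is enough):

```
$ grep -m1 microcode /proc/cpuinfo
microcode       : 0x8701030
```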

Also with `dmesg`

```
$ sudo dmesg | grep -i microcode
[    0.431255] Zenbleed: please update your microcode for the most optimal fix
[    1.127054] microcode: Current revision: 0x08701030

```

Finding a newer version

You can look at the excellent CPUMicrocodes repo. In the `AMD` folder you can search for files matching your CPUID. I found and downloaded this one: `cpu00870F10_ver08701033_2023-10-06_E71C3D44.bin`.
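If you prefer staying in the terminal, cloning the repo and grepping for the CPUID works too (assuming the repo lives at github.com/platomav/CPUMicrocodes; several revisions may match, so pick the newest):

```
$ git clone https://github.com/platomav/CPUMicrocodes.git
$ ls CPUMicrocodes/AMD/ | grep -i 00870F10
```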

It is not in a format understood by the Linux kernel though, so it has to be packaged appropriately.

Packaging as Kernel microcode

First you need to clone and compile amd-ucodegen:

```
$ git clone https://github.com/AndyLavr/amd-ucodegen.git
$ cd amd-ucodegen
$ make
$ ./amd-ucodegen -o ~/packaged-08701033.bin cpu00870F10_ver08701033_2023-10-06_E71C3D44.bin
```

Now you need to check your CPU family: for me this is 23 decimal, i.e. 0x17 hex. So I have to either integrate this file into the existing `/usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin` or just overwrite it. For completeness, let's see how to integrate into it.
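A quick way to confirm the family (a small sketch; /proc/cpuinfo reports it in decimal):

```
$ grep -m1 'cpu family' /proc/cpuinfo
cpu family      : 23
$ printf '0x%x\n' 23
0x17
```

The integration itself goes like this: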

```
$ git clone https://github.com/AMDESE/amd_ucode_info.git
$ sudo mv /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin_bak
$ sudo python amd_ucode_info/amd_ucode_info.py -m /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin_bak packaged-08701033.bin
```

Verify operation with
```
$ python amd_ucode_info/amd_ucode_info.py /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin
Microcode patches in /usr/lib/firmware/amd-ucode/microcode_amd_fam17h.bin:
  Family=0x17 Model=0x08 Stepping=0x02: Patch=0x0800820d Length=3200 bytes
  Family=0x17 Model=0x31 Stepping=0x00: Patch=0x0830107b Length=3200 bytes
  Family=0x17 Model=0xa0 Stepping=0x00: Patch=0x08a00008 Length=3200 bytes
  Family=0x17 Model=0x01 Stepping=0x02: Patch=0x0800126e Length=3200 bytes
  Family=0x17 Model=0x71 Stepping=0x00: Patch=0x08701033 Length=3200 bytes

```

Integrate it in your boot process

Different Linux distributions use different approaches to install early microcode; you can check the Arch Wiki for information. On Fedora 39 I just had to call `dracut -f` and the magic was done.
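To double-check that the new patch actually ended up in the early initramfs, `lsinitrd` can list the early cpio (a sketch; image paths and tooling differ per distribution):

```
$ sudo dracut -f
$ lsinitrd /boot/initramfs-$(uname -r).img | grep -i AuthenticAMD
# should show something like kernel/x86/microcode/AuthenticAMD.bin in the early CPIO
```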

By the way, an alternative to all this is to integrate the original raw microcode into your mainboard's BIOS image. There are some HOWTOs about it, but I think that is more dangerous than loading it on boot.

Verifying the result

```
$ sudo dmesg | grep -i microcode
[    1.126852] microcode: Current revision: 0x08701033
[    1.126854] microcode: Updated early from: 0x08701030
```

You can see the difference from the previous call: the revision is updated and there is no mention of Zenbleed anymore.

Performance effect

The performance effect is hard to measure and is prone to statistical error. But it appears that single-core performance is slightly better while multi-core performance is slightly worse, with the multi-core penalty slightly outweighing the single-core gain.

See my scientific Geekbench results:

Obviously things depend on your workload. The biggest margins were in:

  • Photo filter ~ 6%
  • Text and PDF processing ~ 4%


HTH

Tuesday, March 7, 2023

Reading downloaded logs from Quay.io

Quay is the container registry service sponsored by Red Hat, Inc., based on the projectquay.io free/open source project.

It also supports building images. One issue, though, is that the build logs are downloaded in a custom JSON format.

A simple example is:

> {"logs":[{"data":{"datetime":"2023-03-07 15:19:36.159268"},"message":"build-scheduled","type":"phase"}]}


In this short post I give you a very simple way to read those logs in the terminal:

> jq -r '.logs[].message' < /tmp/98afc879-8ef1-4cc6-9425-cf5e77712a5f.json
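If you also want the timestamp next to each message, a slightly longer filter does the trick (a sketch based on the sample record above):

> jq -r '.logs[] | "\(.data.datetime)  \(.message)"' < /tmp/98afc879-8ef1-4cc6-9425-cf5e77712a5f.json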

Wednesday, April 28, 2021

Updating UEFI boot record on Fedora

This is more of a personal note.

Basically, grub-install is deemed unnecessary now; if you use it, you will break Secure Boot. To restore your boot record, you can do

sudo dnf reinstall shim-* grub2-*
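
To confirm Secure Boot is still in good shape afterwards, mokutil can report its state (assuming the mokutil package is installed):

mokutil --sb-state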

Then, if you also need to update your grub config (which you should not normally need):

Fedora 33 and older:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Fedora 34 and newer:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

In case you have upgraded to Fedora 34 from an earlier version, you'd better use sudo rpmconf -a to restore /boot/efi/EFI/fedora/grub.cfg to the new default version.


While at it, I also learned about the efibootmgr utility. It appears interesting, although I didn't know at first what use one could have for it ¯\_(ツ)_/¯

Actually, efibootmgr can help in case you have a messed-up boot entry that doesn't boot to GRUB or whatever boot manager (or UKI image) you desire.

# list current entries
sudo efibootmgr
# remove an existing entry (here entry 0000)
sudo efibootmgr -B -b 0
# create a new entry
sudo efibootmgr --disk /dev/nvme0n1 --part 1 -L Fedora -l '\EFI\fedora\grubx64.efi'
# change boot order (you see the current order with the first command)
sudo efibootmgr -o 0000,0001,001C,001D,001E,001F,0020,0021,0022,0023,0024,0025

Important: when performing the fixes above, make sure to use a Fedora live image or a netinst image in recovery mode, lest you mess up the SELinux labeling and the machine fails to start with "Failed to mount API filesystem" (as happened to me). In that case you will have to boot with the enforcing=0 kernel cmdline argument and then run fixfiles relabel to fix things up.
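
A rough sketch of that recovery in commands (assuming the default GRUB menu, where pressing 'e' lets you edit the kernel line):

# append to the linux (kernel) command line at the GRUB menu
enforcing=0
# once booted, relabel the filesystem and reboot
sudo fixfiles relabel
sudo reboot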


Monday, April 26, 2021

Rsync between volumes on two different OpenShift clusters

This is a short HOWTO about rsync-ing data between 2 distinct OpenShift clusters.

You always have the option to oc rsync the data from the source OpenShift cluster to your local workstation and then oc rsync from your workstation to the target cluster. But if you have half a terabyte of data, you may not have enough space, or it may take several days because of network bandwidth limitations.

The method I describe below avoids such inefficiencies, and additionally the rsync process is restarted in case some network or system glitch kills it.

It basically works by having:

  • a kubeconfig file with access to the target OpenShift cluster inside a secret on the source OpenShift cluster
  • a pod on target OpenShift cluster with target volume mounted
  • a pod on the source OpenShift cluster with the source volume and the kubeconfig secret mounted, and an entrypoint running oc rsync

So let's start with generating a proper kubeconfig secret.

$ touch /tmp/kubeconfig
$ chmod 600 /tmp/kubeconfig
$ oc login --config=/tmp/kubeconfig # make sure to use target cluster API endpoint
$ oc project my-target-cluster-namespace --config=/tmp/kubeconfig 
Note that the commands below run against the source OpenShift cluster.
$ oc login # use source cluster API endpoint
$ oc create secret generic kubeconfig --from-file=config=/tmp/kubeconfig
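
Before wiring the secret into a pod, it is worth verifying that the kubeconfig really reaches the target cluster (a quick sanity check using the file created above):

$ oc --kubeconfig=/tmp/kubeconfig get pods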

I will assume that you already have your target pod running inside the target cluster. Otherwise, you can create one similar to the pod for the source cluster below; just use some entrypoint command to keep it permanently running, for example /bin/sleep 1000000000000000000.

Now all we need to do is run a proper pod in source cluster to do the rsync task. Here is an example pod YAML with comments to make clear what to use in your situation:

apiVersion: v1
kind: Pod
metadata:
  name: rsync-pod
  namespace: my-namespace-on-source-cluster
spec:
  containers:
    # use client version ±1 of target OpenShift cluster version
    - image: quay.io/openshift/origin-cli:4.6
      name: rsync
      command:
      - "oc"
      args:
      - "--namespace=my-target-cluster-namespace"
      - "--kubeconfig=/run/secrets/kube/config"
      # insecure TLS is not recommended but is a quick hack to get you going
      - "--insecure-skip-tls-verify=true"
      - "rsync"
      - "--compress=true"
      - "--progress=true"
      - "--strategy=rsync"
      - "/path/to/data/dir/"
      - "target-pod-name:/path/to/data/dir/"
      volumeMounts:
        - mountPath: /path/to/data/dir
          name: source-data-volume
        - mountPath: /run/secrets/kube
          name: kubeconfig
          readOnly: true
  # restart policy will keep restarting your pod until rsync completes successfully
  restartPolicy: OnFailure
  terminationGracePeriodSeconds: 30
  volumes:
    - name: source-data-volume
      persistentVolumeClaim:
        claimName: source-persistant-volume-claim-name
    - name: kubeconfig
      secret:
        defaultMode: 420
        secretName: kubeconfig

And the last needed command is to create this pod inside the source cluster:
$ oc create -f rsync-pod.yaml
Now check what state your pod is in:
$ oc describe pod rsync-pod
If it started properly, monitor the progress:
$ oc logs -f rsync-pod
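
Once the transfer finishes, the OnFailure restart policy lets the pod settle into the Completed state, and it can be cleaned up (a small follow-up using the names from the YAML above):

$ oc get pod rsync-pod   # wait for STATUS to become Completed
$ oc delete pod rsync-pod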