Learning Nihongo

I think it is, at the moment, the most difficult language I’ve studied – by a considerable margin.

Even with Quizlet, Ankiweb and tutoring classes, the alphabets are not easy.

After studying Hiragana/Katakana two or three times longer than I spent on the entire Cyrillic alphabet (a solid 3x more), I started to memorize the actual letters. And then the verbs, wow.

The tip is to keep the pace, study every day and don’t give up. After six months of study I saw a considerable improvement in my pronunciation and understanding.

I can only be grateful to my sensei, Caio (email caiounb.jap@gmail.com), who has helped me considerably along this path. I would also recommend the services of the Mirai school; they have teachers that speak English as well.

After a few years(!) of pandemic, I cannot wait to board my plane to Tokyo and visit them.

CFS, Millicores and CPU metrics

Playing with OCP (on large projects) we see how important it is to set adequate memory resources for the application – java == jvm == planning for nominal and spike usage of memory. Less talked about, but just as important, are the CPU resources. Basically, each container running on a node consumes compute resources, and setting/adding/increasing the number of threads is easy as long as we take into consideration the container limitations in terms of CPU. Compute resources == resources (memory and cpu).

spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Planning your application/environment

It is about planning the application – if you know that your app will eat, let’s say, 2 CPUs, then you set requests to 2000m millicores for the application (2000 millicores == 2 cores; 1/5 of a core would be 200m and 1 core == 1000m). Take into consideration that requests are what the application wants at startup and during a normal run, while limits are the threshold: exceed the memory limit and the kernel kills the process with an OOM; exceed the CPU limit and the container gets throttled (more on that below). Knowing that the application should not exceed 3 CPUs, you set the limit to 3000m == 3 cores. Plan for nominal usage but also for high spikes and corner (outlier) utilization. In Kubernetes, 0.5 core == 500m == half a core.
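For illustration only, a minimal sketch of how those example numbers (2000m request, 3000m limit) would look in the pod spec, reusing the container from the snippet above – not a recommendation for any real workload:

spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        cpu: "2000m"   # nominal usage: 2 cores
      limits:
        cpu: "3000m"   # spike ceiling: 3 cores, throttled above this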

Requests do not necessarily mean usage

Setting requests = 2000m does not mean the application will use those 2 CPUs. It can start with a lower amount, let’s say 500m, and keep growing. Think of requests as the normal amount of resources that the application will use. Basically, as the load on CPU and memory increases, you need to make sure you have enough resources to play around with (within the limits and on the host as well).
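A quick way to compare what was requested with what is actually being used (assuming metrics-server is available; the pod name is just a placeholder and the output is hypothetical):

# What the pod asked for – requests/limits straight from the spec
$ kubectl describe pod <pod-name> | grep -A 6 'Requests'

# What it is actually consuming right now (needs metrics-server);
# real numbers will differ
$ kubectl top pod <pod-name>
NAME         CPU(cores)   MEMORY(bytes)
app-xyz      512m         180Mi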

Throttling

Well, in case a container attempts to use more than the specified limit, the system will throttle the container – hold it off. This basically allows your container to have a consistent level of service independent of the number of pods scheduled to the node. On the CPU usage graph you see on the console, you will see a plateau /----\ before a decrease. That is basically the quota/period at work.
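Under the hood that quota/period pair comes from the CFS cgroup controller. A minimal sketch of how to inspect it from inside a container, assuming a cgroup v1 node and a 500m CPU limit (paths differ on cgroup v2):

# Length of one scheduling window, in microseconds (default 100ms)
$ cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000

# CPU time the container may use per window; a 500m limit maps to
# 50000us out of the 100000us period
$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
50000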

Quotas vs Completely Fair Scheduler

Bringing back some knowledge from the Dorsal Lab in Montreal (listening to Blonde) and from studying the Linux kernel and process preemption: basically, Kubernetes uses the well-known Completely Fair Scheduler (CFS) quota to enforce CPU limits on pod containers, and the quotas force preemption exactly like the Linux kernel 🙂 . This explains in more detail how the CPU Manager works.
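The same cgroup also exposes counters that show the throttling/preemption happening. Again a cgroup v1 sketch, with hypothetical numbers:

# nr_periods: scheduling windows elapsed; nr_throttled: windows where the
# quota ran out; throttled_time: total time (ns) the cgroup sat throttled
$ cat /sys/fs/cgroup/cpu/cpu.stat
nr_periods 1045
nr_throttled 87
throttled_time 1934000000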

There are some recommendations not to set CPU limits for applications/pods that shouldn’t be throttled. But I would just set a very high limit 🙂

Podman | Thanks for the accesses

For all the posts I’ve created over the last couple of weeks, I’ve seen people accessing this blog from all over the world: Norway, China, Singapore, Vietnam, Switzerland (yes, I wrote that in one go and got it right). So thanks for your visits from all parts of the world.

With 3 years at Red Hat working with Java, Python and C++, and about 12 years on this IT road (since 2009), as my Nihongo sensei says, the more I learn the more I realize how much I don’t know. But I’m very glad to help all developers out there.

Today we will talk about building images with podman from a Dockerfile.

The Dockerfile has a simple syntax – pretty much where to start from, what to copy and what to run. The trick with the Dockerfile is that it is very simple to use and at the same time very powerful:

Example of a build with podman and a Dockerfile; some Dockerfile best practices can be found here:

# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python3 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Context directory

Interestingly, although I’m using podman 1.6.4, I still need to set the context directory of the build, which shouldn’t be the case.

$ sudo podman build -t Dockerfile . <-------- don't forget the .
STEP 1: FROM registry.redhat.io/datagrid/datagrid-8-rhel8:1.2
Error: error creating build container: Error initializing source docker://registry.redhat.io/datagrid/datagrid-8-rhel8:1.2: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
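As the error message says, the Red Hat registry requires authentication before the base image can be pulled. A minimal sketch of the fix, assuming you have Customer Portal credentials (run with sudo so the login lands in the same auth file as the sudo build above):

# Log in to the Red Hat registry with your Customer Portal credentials
$ sudo podman login registry.redhat.io

# Then re-run the build, keeping the trailing "." as the context directory
$ sudo podman build -t Dockerfile .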