I think that is, at the moment, the most difficult language I’ve studied – by a considerable margin.
Even with Quizlet, AnkiWeb, and tutoring classes, the alphabets are not easy.
After studying Hiragana/Katakana two or three times longer than the entire time I spent on the Cyrillic alphabet (a solid 3x more), I started to memorize the actual letters. And then the verbs, wow.
The tip is to keep the pace, study every day, and not give up. After six months of study I saw a considerable improvement in my pronunciation and understanding.
I can only be grateful to my sensei, Caio (email email@example.com), who has helped me considerably along this path. I would recommend the services of the Mirai school; they have teachers who speak English as well.
After a few years(!) of pandemic, I cannot wait to board my plane to Tokyo and visit them.
Playing with OCP (with large projects), we see the importance of setting adequate memory resources for the application – Java == JVM == planning for nominal and spike memory usage. Less discussed, but also very important, are the CPU resources. Basically, each container running on a node consumes compute resources, and setting/adding/increasing the number of threads is easy as long as we take into consideration the container limitations in terms of CPU. Compute resources == resources (memory and CPU).
It is about planning the application – if you know that your app will eat, let’s say, 2 CPUs, then you set requests to 2000m millicores (2000 millicores == 2 cores; 1/5 of a core would be 200m and 1 core == 1000m). Take into consideration that requests are what the application wants at start and during a normal run, while limits are the threshold – for memory, exceeding it means the kernel will kill the process with OOM. Knowing that the application should not exceed 3 CPUs, you set limit = 3000m = 3 cores. Plan for nominal usage but also for high spikes and corner (outlier) utilization. In Kubernetes, 0.5 core == 500m == half a core.
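To make the millicore arithmetic above concrete, here is a minimal sketch – the `parseMillicores` helper is hypothetical, not a Kubernetes API, it just reproduces the "m"-suffix convention:

```java
public class Millicores {
    // Hypothetical helper: converts a Kubernetes CPU quantity string
    // ("2000m" or a plain "2") into a number of cores.
    static double parseMillicores(String quantity) {
        if (quantity.endsWith("m")) {
            // strip the "m" suffix and divide by 1000: 2000m -> 2.0 cores
            return Double.parseDouble(quantity.substring(0, quantity.length() - 1)) / 1000.0;
        }
        return Double.parseDouble(quantity); // already expressed in cores
    }

    public static void main(String[] args) {
        System.out.println("request 2000m = " + parseMillicores("2000m") + " cores");
        System.out.println("limit   3000m = " + parseMillicores("3000m") + " cores");
        System.out.println("        500m  = " + parseMillicores("500m") + " cores");
    }
}
```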
Requests do not necessarily mean usage
Setting requests = 2000m does not mean the container will use those 2 CPUs. It can start with a lower amount, let’s say 500m, and keep growing. Think of requests as the normal amount of resources the application will use. Basically, as the load on CPU and memory increases, you need to make sure you have enough resources to play with (within the limits, and on the host as well).
Well, in case a container attempts to use more than the specified CPU limit, the system will throttle the container – hold it off – basically allowing your container to have a consistent level of service independent of the number of pods scheduled to the node. On the CPU thread graph in the console you will see a plateau /—-\ before a decrease. That is basically the quota/period at work.
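That CPU limit is also what the JVM itself sees: since JDK 8u191 / JDK 10 the JVM is container-aware (`-XX:+UseContainerSupport` is on by default), so the processor count it reports inside a pod is derived from the cgroup CPU settings, not from the node. A quick sketch to check what your container actually gets:

```java
public class CpuCheck {
    public static void main(String[] args) {
        // Inside a container with a CPU limit, a container-aware JVM returns
        // the limit-derived count (e.g. 3 for limits.cpu=3000m), not the
        // node's physical core count.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("JVM sees " + cpus + " available processors");
    }
}
```

Sizing thread pools from this value, instead of a hardcoded count, is what keeps the thread count consistent with the container's CPU limits mentioned above.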
Quotas vs Completely Fair Scheduler
Bringing back some knowledge from the Dorsal Lab in Montreal (listening to Blonde) and from studying the Linux kernel and process preemption: Kubernetes uses the well-known Completely Fair Scheduler (CFS) quota to enforce CPU limits on pod containers, and the quotas force preemption exactly like the Linux kernel 🙂 . This explains in more detail how the CPU Manager works.
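The CFS mechanics boil down to two numbers: a period (100ms by default) and a quota of CPU time the cgroup may consume per period. A 3-core limit means 300ms of CPU time per 100ms period, summed across all cores; once the quota is exhausted, the container is throttled until the next period. A small sketch of that arithmetic (the cgroup v1 file names in the comments show where these values live on the node):

```java
public class CfsQuota {
    // Default CFS period: 100ms, expressed in microseconds
    // (cgroup v1: /sys/fs/cgroup/cpu/cpu.cfs_period_us)
    static final long PERIOD_US = 100_000;

    // Quota derived from a Kubernetes CPU limit in millicores
    // (cgroup v1: /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
    static long quotaMicros(long limitMillicores) {
        return limitMillicores * PERIOD_US / 1000;
    }

    public static void main(String[] args) {
        // limits.cpu = 3000m -> 300ms of CPU time per 100ms period
        System.out.println("3000m -> quota " + quotaMicros(3000) + "us per " + PERIOD_US + "us period");
        // limits.cpu = 500m -> 50ms per period; beyond that the cgroup is throttled
        System.out.println(" 500m -> quota " + quotaMicros(500) + "us per " + PERIOD_US + "us period");
    }
}
```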
There are some recommendations not to set CPU limits for applications/pods that shouldn’t be throttled. But I would just set a very high limit 🙂
For all the posts I’ve created over the last couple of weeks, I’ve seen people accessing this blog from all over the world – Norway, China, Singapore, Vietnam, Switzerland (yes, I wrote this in one go and got it right). So thanks for your access. With 3 years at Red Hat working with Java, Python, and C++, and about 12 years on this IT road (since 2009), as my Nihongo Sensei says: the more I learn, the more I realize how much I don’t know. But I’m very glad to help all developers out there.
The Dockerfile has a simple syntax – pretty much FROM, COPY, and RUN. The trick with the Dockerfile is that it is very simple to use and at the same time very powerful:
Example of a build with podman and a Dockerfile; some Dockerfile best practices can be found here
FROM node:12-alpine  # assumed base image, since the build uses apk, yarn, and node
RUN apk add --no-cache python3 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
Interestingly, although I’m using podman 1.6.4, I still need to set the context directory of the build, which shouldn’t be the case.
$ sudo podman build -t Dockerfile . <-------- don't forget the .
STEP 1: FROM registry.redhat.io/datagrid/datagrid-8-rhel8:1.2
Error: error creating build container: Error initializing source docker://registry.redhat.io/datagrid/datagrid-8-rhel8:1.2: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
The fix is what the message suggests: authenticate first with `podman login registry.redhat.io` using your Customer Portal credentials, then rerun the build.
Going deep in my investigations of Shenandoah, there is a world of things to learn. What is interesting is that the more you learn about one GC specifically, the more you learn from the others (as a consequence of the associations and correlations your mind can make). Just like reading a book only on Russian rockets (like the N1): at some point you will understand SpaceX’s Merlin/Raptor combustion ratios and such much more easily and quickly. Anyways.
Reviewing the videos and the actual original paper – Shenandoah: An Open-Source Concurrent Compacting Garbage Collector for OpenJDK – it is very interesting that they solve the problem of concurrent access with basically two concepts: SATB (snapshot-at-the-beginning, which is also used in G1) and the forwarding pointer (which adds one more word to the two already there in OpenJDK).
One of those details that I understood much better while learning Shenandoah was the importance of compaction to avoid fragmentation, which actually affects CMS (deprecated in JDK 9, and bye bye in JDK 14). Learning concurrent GCs helps you learn STW GCs <– of course, all of them are GCs at the end of the day.
Concurrent compaction is a very big part of the Shenandoah algorithm, as it is one of the three main concurrent phases (the others being marking – delimited by the init-mark and final-mark STW pauses – evacuation, and updating references).
Roman Kennke has been a very good mentor, setting up stepping stones for me to learn considerably. So has the PhD thesis of Paul Thomas, which introduces several concepts like SATB vs incremental updates, and barriers – write/read barriers and so on. And of course the forwarding pointer, and how it changed from Shenandoah 1 to Shenandoah 2.
I’ve been working with Knowledge-Centered Service (writing solutions and articles) since 2018, so 3+ years now. I think it is one of those ideas that is worth sharing. It makes writing solutions considerably easier, and it comes with principles and practices: from the Solve loop (capture, structure, reuse, improve) to the Evolve loop (content health, process integration, performance assessment, and leadership and communication).
And it is very complementary to ITIL. I took an ITIL v3 course back in 2014, but only later, in 2018, did I learn about the KCS methodology. Knowledge is part of the workflow, yet not necessarily treated as the main part of it. I really don’t know how IT/IT services/IT support worked without both of these methodologies.
The fact is, I always had more fun passing along the construction of a natural procedure, which will happily be replicated, rather than an imposition on the workflow. The created solution/article will benefit not only the creator but other analysts and customers as well. That’s an easy sell.
Some principles, though, are very important and can be applied everywhere, like defining/understanding the problem before solving it – this one can be applied to almost anything in life/work. Another one I learned is that the more you write, the more you will like it. I think that was a big change on this blog over the last couple of years. The more solutions I wrote (and I wrote more than 400 of them), the more this blog got organized and simple to read.
Interestingly, the official website – KCS v6 Practice Guide, Technique 4.1: Reuse is Review – says 80% of the problems are solved by 20% of the solutions. I will disagree with that. Most of the solutions I have seen, I reused/edited somehow. Even if it was just once, a solution will sometimes help clarify another problem in the future. I’d say only 20% or less of the solutions are seldom to rarely used, and the site might imply a sense of uselessness, which is not the case.
Finally, I would suggest getting a KCS certification from them; it is very interesting and pushes you to learn more – at least the KCS Fundamentals certification.
For the Java followers who have been watching the language evolve over the last couple of years: remember the big difference between JDK 7 and JDK 8, when Metaspace was “created” – that is, PermGen was removed from the heap. And this change made a lot of sense, since class information is now kept outside of the heap. Basically, on JDK 8+, class metadata is allocated in native memory when classes are loaded.
However, the main impact we have seen in some scenarios was applications with a high number of class loaders occupying a considerable (as in non-trivial) amount of memory, now on the native side. And it was like that for quite a while. The technical reason is that each class loader takes a share of the metachunks and occupies space in its own arena – the more class loaders, the more space is taken, i.e. per-class-loader areas. This space would only grow with time, as more class loaders and classes were loaded.
That’s not the case anymore. Elastic Metaspace introduces mechanisms for uncommitting memory back to the OS and committing memory only on demand (seriously, before it was just a big committed space without usage). That is, the allocator will attempt to return memory to the OS after class unloading, much more eagerly than it did before Elastic Metaspace. It is implemented via the buddy memory allocation algorithm.
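To watch Metaspace behavior on a live JVM, the standard java.lang.management API exposes it as a non-heap memory pool – a small sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // HotSpot exposes a non-heap pool literally named "Metaspace"
            if ("Metaspace".equals(pool.getName())) {
                long used = pool.getUsage().getUsed();
                long committed = pool.getUsage().getCommitted();
                System.out.println("Metaspace used=" + used + " committed=" + committed);
            }
        }
    }
}
```

Sampling used vs committed before and after class unloading is a simple way to see the uncommit-on-demand behavior described above.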
And of course, more JVM features usually mean new flags, which is why the new JVM flag MetaspaceReclaimStrategy was introduced. The flag has three options: none, balanced, and aggressive, as described in JDK-8242841. Balanced is the default.
balanced – shall enable a reasonable compromise between reclaiming memory and the incurred costs. The majority of applications should see improved memory usage after class unloading while not noticing added costs. In particular, the number of virtual memory areas should only marginally increase. This setting aims to deliver backward-compatible behavior while still giving the benefit of reduced memory usage.
aggressive – shall enable more aggressive memory reclamation at possibly increased cost. Choosing this setting means the user would be willing to increase the OS-specific limit on virtual memory areas if needed. However, “aggressive” should still be a perfectly viable setting.
none – disables memory reclamation.
The consequence of the previous implementation – reserve/commit a big chunk even when not used – is that you can use more than needed and reclaiming is not an option, aka non-Elastic Metaspace. If you would like to keep that behavior, just set MetaspaceReclaimStrategy=none.
With the new implementation, i.e. Elastic Metaspace, the tradeoffs will be:
1. Losing CPU cycles in the uncommitting process ~ meaning CPU overhead.
2. Uncommitting memory may increase the number of virtual memory areas (memory mappings) ~ meaning (virtual) memory overhead. On Linux, the number of mappings per process is capped by vm.max_map_count, and `ulimit -v` shows the address-space limit. From what I searched, the virtual address space would be 2^48 for 64 bits ~ still learning about it though.
A complete description of the algorithm is here, and an overview in JEP 387.
I’ll be doing some blog posts about the features above.
Also, in terms of overhead, the native memory footprint will get smaller because of Elastic Metaspace, instead of a static reservation. That is very interesting, and I will certainly do some benchmarks to compare.
There was also a lot of removal/deprecation, like CMS, Nashorn, and RMI Activation.
It is important to highlight here that ZGC, the ultra-fast GC from Oracle – similar to Shenandoah in terms of speed – is now production-ready as well; I think I already did one or two blog posts about it. Shenandoah is production-ready in JDK 8 and JDK 11, since it was backported to JDK 8.
Previously I have done some posts about Shenandoah; now it is time to talk about Oracle’s ZGC.
ZGC has an algorithm similar to Shenandoah’s (in terms of concurrent GC), only it uses colored pointers to mark the objects, and that demands 64 bits – so there is no 32-bit ZGC, since the extra bits are necessary for the reference coloring. Compare that with Shenandoah’s additional level of indirection (the forwarding pointer), which adds one word (on top of the two already in OpenJDK). The tradeoff used to be footprint, because for each object you add one more word (warning: this is true for Shenandoah 1; Shenandoah 2 uses the mark word for the forwarding pointer, so there is no extra footprint). Of course, Shenandoah aims for very small pauses, i.e. sub-millisecond, handling of large heaps, and scalable pauses. It is available for JDK 11 on Windows, Linux x64, macOS, and Linux/AArch architectures.
This is a big thing: that is the tradeoff Oracle’s ZGC took with its 64-bit implementation, so no 32 bits. I don’t think ZGC has a degradation mode (or pacing mechanism) as Shenandoah does, meaning it will not stall the application threads to let the GC finish its job (it will add latency as a consequence, although invisible – 42:00).
Like G1GC, and very much contrary to CMS, it does not have that many flags to set. As with Shenandoah, you can play with/tune it a bit: basic initial heap size, number of concurrent GC threads, NUMA – a JDK 17 feature – and huge and large pages. On some versions one can set AlwaysPreTouch.
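A quick way to confirm which collector the JVM actually picked (e.g. after starting with -XX:+UseZGC or -XX:+UseShenandoahGC) is to list the garbage collector MXBeans; the exact bean names vary per collector and JDK version, but they name the GC in use:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class WhichGc {
    public static void main(String[] args) {
        // Each collector registers one or more MXBeans whose names reveal
        // the active GC (e.g. "G1 Young Generation", "Shenandoah Pauses").
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " - collections so far: " + gc.getCollectionCount());
        }
    }
}
```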