Back on DG/EAP 7, we used to rely on templates for Source-to-Image (S2I) deployments.
In fact, I liked template deployments: of course you need to import the templates and such, but I don't think they are bad from a functional perspective. And the Operator is so simple that it gives the user much more time to focus on what matters.
I think Helm charts will streamline this process compared to templates. They are pretty interesting and, in some ways, very flexible.
## to install
helm install <release_name> <chart> <flags> --> flags can be in the middle as well
## to upgrade
helm upgrade <release_name> <chart> <flags> --> flags can be in the middle as well
## to uninstall
helm uninstall <release_name>
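As a sketch of the full lifecycle, here is roughly what a cycle could look like. The chart repository URL, the chart name, and the `--set` value are assumptions (the jboss-eap charts repo is one common source for EAP on OpenShift); the function is only defined, not invoked, since running it requires helm and a logged-in cluster:

```shell
# All names below are placeholders; the jboss-eap Helm repo and the
# eap8 chart are assumptions, not something this post depends on.
deploy_cycle() {
  helm repo add jboss-eap https://jbossas.github.io/eap-charts/
  helm install my-app jboss-eap/eap8           # first deployment
  helm upgrade my-app jboss-eap/eap8 \
    --set deploy.replicas=3                    # value name is a placeholder
  helm uninstall my-app                        # tear down the release
}
# deploy_cycle   # uncomment when logged into a cluster with helm available
echo "helm lifecycle sketch loaded"
```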
It is interesting to edit the pod's YAML directly on the StatefulSet/Route instead of going through the custom resources, which is the Operator's default. That brings a sense of flexibility, but of responsibility as well, given there is much more room for the user to screw up.
I love Avatar, both of them: great movies, made by a Canadian, with astonishing images. The first one came out years ago and its images are still impressive. Pandora is amazing.
However, the second one has one main problem: it is 100% plot-driven, not character-driven. That's not a spoiler, but be aware that the characters won't make hard choices that set up a path for adventure. It is pretty much the opposite: the plot happens despite the characters' choices.
Basically there is only one character that drives the plot: the whale, and maybe the "bad guys" as well, plus one or two difficult choices. There is no big dilemma, no small moments where the characters' choices become critical to the ending.
The movie is not bad, though, but it probably won't end up as the best of the Avatar series.
At some point I gave up on the plot and just watched the images, like the visualizations Windows Media Player used to generate for music.
I mean, arguably, one can imagine several scenarios and how each character should behave, and the end result might not even be better than what we got. But by changing some parts of the script, not the fundamentals, the result would be twice as good and the characters would be much more relatable. Because we relate to characters that make choices, even bad ones.
But the likely explanation is that there will be several more movies, so this one is just laying stepping stones so the next ones can have some dilemmas and hard choices.
Working with OpenShift on a daily basis, I run into several situations where pods crash. Given my background in Java, I will talk about Java here.
Let me list a few situations and the next steps:
- Pod crashes with an OOME: the Java process uses more heap than it is supposed to. The JVM would normally throw an OutOfMemoryError (and can generate a heap dump on it), but the container might exit immediately given -XX:+ExitOnOutOfMemoryError.
- Pod crashes via the OOM killer: check dmesg on the OCP node and verify whether there are OOM-killer messages.
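For the second case, grepping the node's kernel log is usually enough. A minimal sketch; the dmesg lines below are simulated so the filter itself runs anywhere, and the real log format varies by kernel version:

```shell
# On the node itself (or via: oc debug node/<node> -- chroot /host dmesg)
# you would pipe real dmesg output into this grep. Simulated sample:
dmesg_sample='java invoked oom-killer: gfp_mask=0xcc0, order=0
Memory cgroup out of memory: Killed process 4321 (java)'
echo "$dmesg_sample" | grep -iE 'oom-killer|killed process'
```

If the grep prints matching lines, the pod was killed from outside the JVM rather than by a Java-level OutOfMemoryError.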
How to know why my pod is crashing?
Let's pretend you don't know why the Java pod is crashing (Java pod here == a pod with a single container running Java). The first step is to determine whether the pod hit an OOME (inside the JVM) or is suffering from the OOM killer.
An OOME is handled by the JVM itself; however, because containers usually run with ExitOnOutOfMemoryError, the container will exit, which prompts the orchestrator to respawn new pods after a certain timeout period.
The OOM killer, on the other hand, is an external agent (the OCP node, or cgroups) acting out and terminating the container under a certain condition, such as lack of resources: if OCP (the kubelet) needs to spawn a certain pod but doesn't have resources, it might terminate the BestEffort QoS pods in favor of spawning Guaranteed ones.
Or it can be a native allocation breaching the cgroup limits, causing the container to exit by being killed.
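From inside the container you can check which memory limit the cgroup actually enforces, to compare against the JVM's total footprint. A sketch handling both cgroup v1 and v2 paths (which path exists depends on how the node is configured):

```shell
# cgroup v2 exposes memory.max; v1 exposes memory/memory.limit_in_bytes
if [ -r /sys/fs/cgroup/memory.max ]; then
  echo "cgroup v2 limit: $(cat /sys/fs/cgroup/memory.max)"
elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
  echo "cgroup v1 limit: $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)"
else
  echo "no cgroup memory limit file readable here"
fi
```

If the JVM's resident set creeps toward this limit while the heap stays flat, native allocations are the usual suspect.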
Complementary to jcmd VM.native_memory, the jcmd command VM.info, which I have discussed a few times on this blog, can be an awesome tool for investigating (native) leaks. If I'm not mistaken, this feature requires 8u222 or later.
In fact, for containers, I would just run jcmd VM.info directly, since it already includes the VM.native_memory output. So VM.info can easily be used instead; the native memory info will be in the VM.info output.
jcmd PID VM.info
VM.info shows a detailed summary of the VM and the OS, including the native details and shared libraries. The native details only appear if one of the Native Memory Tracking flags, -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=details, is set; otherwise VM.info won't display this section, but the other sections will be there regardless.
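Putting it together for a containerized JVM: enable NMT at startup, then pull VM.info from the running process. The pod name and PID 1 are assumptions (PID 1 is typical for a single-process container), so the cluster commands are left commented:

```shell
# NMT must be enabled before the JVM starts; it cannot be turned on later:
#   JAVA_OPTS="$JAVA_OPTS -XX:NativeMemoryTracking=summary"
#
# Then, against the running pod (pod name is a placeholder):
#   oc exec my-java-pod -- jcmd 1 VM.info > vm-info.txt
#
# Locally, jcmd can list JVMs and run the same diagnostic:
if command -v jcmd >/dev/null 2>&1; then
  jcmd -l    # pick a pid from this list, then: jcmd <pid> VM.info
else
  echo "jcmd not found; it ships with the JDK"
fi
```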