Java Development Environment
Before it is possible to start working on the code, the following preparations need to be done:
Install sumaform and start a virtual machine with the current version of Uyuni.
There are instructions for setting up without Terraform, but that approach is not recommended for general development.
Clone the repository:
git clone git@github.com:uyuni-project/uyuni.git
git checkout <development_branch_name>
Where <development_branch_name> will be master by default.
To create a new topic branch and switch to it, use the following command:
git checkout -b master-<topic_name>
Use a topic name like my-great-feature if you work on a feature, or include a bug reference (like bsc1234567).
Install Java, ssh, rsync, ivy, ant, junit-ant and obs-to-maven. On openSUSE you can run:
sudo zypper in java-17-openjdk-devel openssh rsync apache-ivy ant ant-junit5 servletapi5 cpio
obs-to-maven can be found in systemsmanagement:Uyuni:Utils. Add the repository for your distribution and then install this package.
- For SUSE-related systems, install the package with zypper:
sudo zypper addrepo obs://systemsmanagement:Uyuni:Utils systemsmanagement:uyuni:utils
sudo zypper install obs-to-maven
- For other systems, use your preferred package manager. See the full list of supported systems.
Get the project dependencies from the internal Ivy repository:
cd <path_to_spacewalk>/java
ant -f manager-build.xml ivy
Compile a first version of the branding jar:
ant -f manager-build.xml refresh-branding-jar
If you get an error message like Unable to find a javac compiler, you probably need to set the environment variable $JAVA_HOME to the path of the Java 17 OpenJDK installation. For example:
export JAVA_HOME=/usr/lib64/jvm/java-17-openjdk
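If you prefer not to hardcode the path, a small sketch like the following can derive JAVA_HOME from whichever javac is on your PATH; the fallback is the openSUSE default path shown above, and the symlink assumption may not hold on every distribution:

```shell
# Sketch, assuming javac (when installed) is a symlink managed by the distribution:
# derive JAVA_HOME from the javac found on PATH, falling back to the openSUSE
# default path from the text above.
if command -v javac >/dev/null 2>&1; then
  JAVA_HOME="$(dirname "$(dirname "$(readlink -f "$(command -v javac)")")")"
else
  JAVA_HOME=/usr/lib64/jvm/java-17-openjdk
fi
export JAVA_HOME
echo "Using JAVA_HOME=$JAVA_HOME"
```

Add the export to your shell profile if you want it to persist across sessions.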
For Debian systems, you additionally need to install javapackages-tools on top of the packages listed above. Take the RPM from https://software.opensuse.org/package/javapackages-tools, convert it with alien, and install it.
You can install Java, Python, ant, rpm, rsync via Homebrew:
brew install ant python rpm rsync
Then you can install obs-to-maven via pip:
wget https://github.com/uyuni-project/obs-to-maven/archive/refs/tags/v1.1.0.tar.gz
tar xvf v1.1.0.tar.gz
cd obs-to-maven-1.1.0
pip3.9 install .
Get the project dependencies from the internal Ivy repository:
cd <path_to_spacewalk>/java
ant -f manager-build.xml ivy
Compile a first version of the branding jar:
ant -f manager-build.xml refresh-branding-jar
Before you can deploy to a remote machine running the server in a Podman container, you need to set up a Podman remote connection.
$> ssh-copy-id [email protected]
$> podman system connection add dev ssh://[email protected]
This creates a connection named dev which can be used for mgrctl commands.
Check that the connection works, for example:
$> CONTAINER_CONNECTION=dev podman ps
If you receive an error, verify if the container host VM has SELinux enabled:
ssh [email protected] getenforce
Permissive
If SELinux is set to Enforcing, that might cause issues when using the Podman socket remotely.
You can edit /etc/sysconfig/selinux-policy to permanently disable SELinux (reboot after editing the file).
Alternatively, you can disable SELinux until reboot by using:
ssh [email protected] setenforce 0
You also need to install mgrctl from Uyuni Container Utils:
$> zypper ar -f https://download.opensuse.org/repositories/systemsmanagement:/Uyuni:/Stable:/ContainerUtils/openSUSE_Leap_15.5/ uyuni-container-utils
$> zypper in mgrctl
Check https://download.opensuse.org/repositories/systemsmanagement:/Uyuni:/Stable:/ContainerUtils/ for other OSes
Before deploying to the test server, run the checkstyle target:
cd <path_to_spacewalk>/java
ant -f manager-build.xml checkstyle
Then, you can deploy Java code with:
$> CONTAINER_CONNECTION=dev ant -f manager-build.xml refresh-branding-jar deploy-restart-container
Running that will compile the sources, generate a build/webapp directory, copy it over to the virtual machine, apply some additional configuration, and restart the relevant services.
You might want to monitor different logfiles. To do that you need to open a terminal connection to the container:
$> CONTAINER_CONNECTION=dev mgrctl term
CONTAINER> tail -f /var/log/rhn/rhn_web_ui.log
The following logfiles might be of interest as well:
tail -f /var/log/tomcat6/catalina.out
tail -f /var/log/rhn/rhn_taskomatic_daemon.log
tail -f /var/log/apache2/error_log
tail -f /var/log/messages
Make sure your server has dev_only enabled (sumaform), otherwise it will not work. You don't need to build Java if you only modify CSS/frontend code.
All frontend prerequisites should be checked before deploying.
$> cd spacewalk/java
$> CONTAINER_CONNECTION=dev ant -f manager-build.xml refresh-branding-jar deploy-static-resources-container
If you have Javascript/Node.js errors:
$> cd spacewalk
$> yarn install
When you have changes in the salt states, grains, modules, beacons, etc. in susemanager-utils/susemanager-sls/, you can also deploy them directly into the container.
$> cd spacewalk/java
$> CONTAINER_CONNECTION=dev ant -f manager-build.xml deploy-salt-files-container
Before deploying to the test server, run the checkstyle target:
cd <path_to_spacewalk>/java
ant -f manager-build.xml checkstyle
Then, you can deploy Java code with:
ant -f manager-build.xml refresh-branding-jar deploy -Ddeploy.host=suma3pg.tf.local restart-tomcat restart-taskomatic
Running that will compile the sources, generate a build/webapp directory, copy it over to the virtual machine, make some additional configuration and restart relevant services.
You might want to monitor different logfiles, for example (in separate terminals):
tail -f /var/log/tomcat6/catalina.out
tail -f /var/log/rhn/rhn_taskomatic_daemon.log
tail -f /var/log/apache2/error_log
tail -f /var/log/messages
Make sure your server has dev_only enabled (sumaform), otherwise it will not work. You don't need to build Java if you only modify CSS/frontend code.
All frontend prerequisites should be checked before deploying.
cd spacewalk/java
ant -f manager-build.xml refresh-branding-jar deploy-static-resources -Ddeploy.host=mysuma-devhead.tf.local
If you have Javascript/Node.js errors:
cd spacewalk
yarn install
JUnit tests are automatically triggered whenever there are commits in the Manager or Manager-X.Y branches. Currently we also run unit tests on each PR submitted on spacewalk. (These are like Travis notifications and run on our private Jenkins with special containers.)
You can also run JUnit tests on your local machine, but they need an external database (Oracle or Postgres). (Or you use the container.) Tests are expected to clean up after they finish, so they should not change the database state (except for sequence numbers) during normal operation. Nevertheless, bugs and failed runs can insert random data, so you are advised to have a snapshot ready in case you have to roll back your database virtual machine.
IMPORTANT: You can easily start a test database by running the following command in the java folder (this requires docker):
make -f Makefile.docker dockerrun_pg
Some hints for docker:
- after installing docker, verify that the docker daemon is running:
sudo systemctl status docker
- your user should be in the docker group to avoid running it as sudo (may require restarting the machine):
sudo usermod -aG docker $USER
- in the file /etc/docker/daemon.json (create it if it does not exist), add the following property to be able to connect to the SUSE docker images:
"insecure-registries": ["registry.mgr.suse.de"]
Instructions to run tests:
- configure an rhn.conf file in buildconf/test. JUnit-specific minimal samples are provided for Oracle and Postgres in the same directory.
- create the directory /usr/share/rhn/config-defaults.
- create the directory /var/log/rhn/.
- create the directory /srv/susemanager with correct privileges, so that the user who runs the tests can write temporary files there.
- optionally, select a subset of tests to include, a subset to exclude, or both, using globs in buildconf/manager-test-excludes and buildconf/manager-test-includes. By default all tests are included, except those that are broken beyond repair.
- run ant -f manager-build.xml test-report inside the java folder.
- analyze the results: enter the folder java/test-results/html and run the command python3 -m http.server 8000. Now you can open http://localhost:8000 in a browser.
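The directory creation steps above can be sketched as a small script. DESTDIR is a hypothetical staging prefix added here so the sketch can run without root; on a real development machine use DESTDIR="" and run it with sudo:

```shell
# Sketch of the one-time directory setup from the list above. DESTDIR is a
# hypothetical staging prefix (not part of the original instructions); set it
# to "" and run with sudo to create the real paths under /.
DESTDIR="${DESTDIR:-/tmp/uyuni-test-root}"
mkdir -p "$DESTDIR/usr/share/rhn/config-defaults" \
         "$DESTDIR/var/log/rhn" \
         "$DESTDIR/srv/susemanager"
# Give the user running the tests write access for temporary files.
chmod u+rwx "$DESTDIR/srv/susemanager"
```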
- java.nio.file.attribute.UserPrincipalNotFoundException: apply the following patch.
- Investigate logs in /var/log/rhn/.
Edit java/code/src/log4j.properties and add the classes for which you want debug output in catalina.out:
log4j.logger.com.redhat.rhn.frontend.action.renderers.HttpProxyRenderer=DEBUG
log4j.logger.com.redhat.rhn.manager.setup.SetupWizardManager=DEBUG
Please find instructions on how to set up logging with the DWR library here: http://directwebremoting.org/dwr/documentation/server/logging.html. This is especially important since DWR will by default be silent about all application exceptions! A setup with log4j as we use it in Uyuni can look like this (add this to log4j.properties):
# DWR logging
log4j.logger.org.directwebremoting.log.accessLog=DEBUG
Further, web.xml needs to be edited as well, e.g. by adding the following init parameter to the DWR servlet definition to set a specific log level:
<init-param>
<param-name>accessLogLevel</param-name>
<param-value>EXCEPTION</param-value>
</init-param>
Alternatively, it is possible to enable a general "debug" mode, which will set the accessLogLevel to EXCEPTION by default.
You can also make DWR marshal Java exceptions to Javascript for easier debugging through Firebug or similar. To do that, add the following to your dwr.xml:
<convert match="java.lang.Exception" converter="exception"/>
<convert match="java.lang.StackTraceElement" converter="bean"/>
Cucumber tests (defined in the testsuite directory) are automatically triggered every few hours. A mail is sent to the mailing list when they break.
If you want to run them yourself, read the documentation here:
From time to time you might want to check a certain piece of code for performance reasons. The tool we typically use is the open source VisualVM.
Processor-wise, there are two kinds of analysis that you can do with VisualVM:
- sampling: VisualVM periodically checks what method the VM is executing and makes time percentage estimations based on the number of counts each method gets. This method has relatively low overhead and low accuracy;
- profiling: VisualVM keeps track of start and end timestamps for every method in the application, computes wall time estimations based on "real" time data. This method has relatively high accuracy but it adds a large overhead, so in the end it might skew results. Be careful interpreting results!
You can connect a VisualVM instance to a Java process:
- via a local process connection: VisualVM can connect to a running process on the same machine, this is the most efficient way where applicable;
- via the JMX network protocol: for a process on another machine;
Then launch VisualVM and use File -> Add JMX Connection... to connect it to <SERVER_NAME>:<PORT>.
If you are using sumaform, JMX access is already configured on port 3333 for Tomcat and port 3334 for Taskomatic otherwise see the official guide for additional information.
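For reference, if you need to wire up JMX manually on a non-sumaform server, these are the standard JVM system properties behind such a plain JMX port; where exactly to add them (Tomcat or Taskomatic service configuration) depends on your setup:

```shell
# Hedged sketch: standard JVM properties for an unauthenticated JMX port, as
# sumaform configures it (3333 for Tomcat, 3334 for Taskomatic). Never expose
# such a port outside a trusted development network.
JMX_PORT=3333
JMX_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=$JMX_PORT \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"
echo "$JMX_OPTS"
```

Append these to the JVM options of the service you want to inspect, then restart it.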
Gotchas:
- always be sure to do your tests using the same VM vendor/version that you will be deploying to as results might be quite different depending on specific implementation/version optimizations;
- profiling can only be done from a local connection, so VisualVM must run on the same machine in that case;
- as of May 2017, profiling only works when the process being profiled runs in OpenJDK, which means changing the conditions in our case as we use IBM Java in SUSE Manager;
- local connections only see Java processes running from the same VM executable that launched VisualVM (you can change that editing config/visualvm.conf), otherwise you will not see them listed;
- local connections only see Java processes that were launched by the same user, so be sure to use that same user to run VisualVM;
- sampling has actually low overhead on openjdk 7 (typically some percent), while it is not so cheap on ibm j9 v6 (typically 200%);
- beware that all time durations and percentages from VisualVM are "wall time", not "CPU time", so they will include disk access delays, network access, etc.;
- beware that if VisualVM is slowing your process down too much, results can also be skewed by the very presence of the profiler. At the first run VisualVM will calibrate itself to try to compensate for that, but it won't be 100% accurate, so be sure to interpret results correctly;
- VisualVM (at least the current Linux version, 1.3.5) does NOT currently profile correctly ANY thread that was started BEFORE the profiler was connected. So be sure to use a breakpoint or other mechanism in order to stop execution until you have VisualVM running, THEN start any thread you want to measure;
- VisualVM (at least the current Linux version, 1.3.5) does NOT currently work correctly unless it is launched from the bin/ directory. So cd bin; ./visualvm will work while ./bin/visualvm won't!
- VisualVM (at least the current Linux version, 1.3.5) does NOT currently work if it is launched from IBM j9;
- VisualVM (at least the current Linux version, 1.3.5) does NOT currently support profiling of IBM j9 processes.
Uncomment the commons-lang dependency in java/buildconf/ivy/ivy-suse.xml.
Simply run the following within the java folder:
ant -f manager-build.xml ivy apidoc-asciidoc
The other apidoc formats can be generated by calling one of the other ant targets: apidoc-singlepage, apidoc-jsp, apidoc-list, apidoc-html or apidoc-docbook.
To validate the generated apidoc, the apidoc-validate target can be used. It calls the apidoc-docbook target to generate the doc, then runs xmllint to validate it.
ant -f manager-build.xml apidoc-validate
To get more debug info from the template processor, add the -debug parameter to the javadoc additionalparam attribute in java/manager-build.xml and call the ant target again.