Topic sk uptake development #247

Open

wants to merge 30 commits into main
Conversation

skommala
Member

Uptake development branch

skommala and others added 30 commits July 27, 2023 10:27
Fix the connection string provisioning issues

1. Created a JRF instance with a RAC database without providing a DB connect string.
2. Created a JRF instance with a RAC database while providing a DB connect string.
Fixed bug: Certificate on LB is reset in scaling operation.
Test
-----

1. Create a single VM instance with a load balancer.
2. Add a new certificate to the load balancer. Note that you can generate a key pair from any OCI compute instance with 'openssl req -new -sha256 -newkey rsa:2048 -nodes -keyout testssl.key -x509 -days 365 -out testssl.crt'. Note that you'll need to add both the certificate and private key PEMs that are generated to the certificate.
3. Associate the certificate you added with the listener for the load balancer.
4. Delete the demo certificate that was generated by the stack from the load balancer.
5. Edit the stack and add a node.
6. Make sure that the new certificate is still assigned to the listener. The demo certificate will be recreated but not assigned to any listener.

The certificate created and added by a customer is not reset, but the certificate created by Terraform will still be restored if it is deleted.
Uptake 23.3.2 marketplace values.
Implement -
[JCS-14015](https://jira.oraclecorp.com/jira/browse/JCS-14015) - Bug -
load-balancer policy required for instance creation

Provisioned an instance without a load balancer as a non-admin user.
Provisioned an instance with a load balancer as a non-admin user.

---------

Co-authored-by: Abhijit Paranjpe <[email protected]>
(#225)

Bug - Fail to get password expiry date for OPSS user when using connect
string

Note that the DB service name is not guaranteed to include the PDB name (I proved this by using a connect string without the PDB name in it to successfully create a WLS for OCI instance). Therefore, the PDB name must be asked for explicitly.

Also note that the validation change added will not be executed, but to limit the scope of the changes I updated the validation only and didn't try to also add it to the validator. I suspect that the validation was never added in order to ensure that 11g databases, which don't have a PDB, can be allowed.

Tested that the error occurred when setting a connect string. After the fix, with the PDB name provided, the error did not occur.
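A minimal sketch (not code from this PR) of why the PDB name cannot be reliably derived from the connect string; the function name and sample values are hypothetical:

```python
# Hypothetical illustration: in an EZConnect-style connect string
# 'host:port/service_name', the service name is a database-level alias
# and is not guaranteed to match the PDB name.
def parse_connect_string(connect_string):
    """Split 'host:port/service_name' into its parts."""
    host_port, _, service_name = connect_string.partition("/")
    host, _, port = host_port.partition(":")
    return host, port, service_name

# The service name returned here may omit the PDB name entirely,
# which is why the stack has to prompt for the PDB name separately.
print(parse_connect_string("dbhost.example.com:1521/orclsvc.example.com"))
```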
…(#224)

validate_vcn_cidr.py always returns errors in the bootstrap log because wls_vcn_cidr is empty with existing subnets.

Note that this is essentially a revert to the previous commit. The check-in log for that commit states, "Use customer provided NSGs for existing subnet provisioning". Therefore, I tested with an existing subnet and NSGs provided, with this line change reverted, and there were no issues. Other tests in addition to existing subnet with NSGs provided:
1. New VCN.
2. Existing VCN, new subnet.
3. Existing subnet with security rules.
In each case I tested with IDCS added so that the call to validate_vcn_cidr.py would occur. I not only made sure the error no longer appeared, but also verified that the metadata value was present and ran validate_vcn_cidr.py by hand.
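A minimal sketch (not the actual validate_vcn_cidr.py) of the kind of guard that avoids spurious errors when wls_vcn_cidr is empty for existing-subnet provisioning; the function and variable names are assumptions:

```python
import ipaddress

def vcn_cidr_conflicts(wls_vcn_cidr, existing_cidrs):
    """Report a conflict only when a non-empty wls_vcn_cidr overlaps an existing CIDR."""
    if not wls_vcn_cidr:
        # Existing-subnet provisioning leaves wls_vcn_cidr empty, so there is
        # nothing to validate and no error should be logged.
        return False
    candidate = ipaddress.ip_network(wls_vcn_cidr)
    return any(candidate.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs)

print(vcn_cidr_conflicts("", ["10.0.0.0/16"]))             # False: empty CIDR, check skipped
print(vcn_cidr_conflicts("10.0.0.0/24", ["10.0.0.0/16"]))  # True: overlapping ranges
```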
Verified the issue by creating a stack in a compartment without dynamic group policies set and selecting the OCI Policies checkbox. Clicked on Instances | Instance Details, navigated to OS Management, and saw: "No OS management information is available for this resource."
After the fix, ran the same test and the OS Management information appears.
Uptake 23.3.3 release image values.

---------

Co-authored-by: Abhijit Paranjpe <[email protected]>
(#230)

JCS-14046 Support VM.Standard.E5.Flex shape, but not as the default shape.
Testing using the E5.Flex shape (requires an OL8.8 image):
- 14.1.1.0 JDK11 with IDCS; idcs-sample-app logged into.
- 12.2.1.4 JRF on ATP with IDCS (2 OCPU count); idcs-sample-app logged into.
- 14.1.1.0 JDK8 with IDCS; validated cloning.

Testing max CPU utilization (E5.Flex allows a 94 OCPU max):
- Using the same logic changes in this MR, built a stack with:
  - max OCPUs for E5.Flex set to 1
  - max OCPUs for E4.Flex set to 2
  - Set 2 OCPUs for E5.Flex and ran tf plan; confirmed the validation error fired.
  - Set 3 OCPUs for E4.Flex and ran tf plan; confirmed the validation error fired. This shows no regression in the logic changes.
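A minimal sketch of the per-shape OCPU cap check exercised above; only the E5.Flex limit (94 OCPUs) comes from this PR, and the other value and all names are illustrative:

```python
# Illustrative OCPU caps per shape; only the E5.Flex value (94) is taken
# from this PR's notes, the E4.Flex value is a placeholder.
MAX_OCPUS_BY_SHAPE = {
    "VM.Standard.E5.Flex": 94,
    "VM.Standard.E4.Flex": 64,  # placeholder for illustration only
}

def validate_ocpu_count(shape, requested_ocpus):
    """Raise if the requested OCPU count exceeds the cap for the given shape."""
    limit = MAX_OCPUS_BY_SHAPE.get(shape)
    if limit is not None and requested_ocpus > limit:
        raise ValueError(
            f"{shape} supports at most {limit} OCPUs; {requested_ocpus} requested"
        )

validate_ocpu_count("VM.Standard.E5.Flex", 2)     # passes
# validate_ocpu_count("VM.Standard.E5.Flex", 95)  # would raise ValueError
```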
Uptake marketplace image values into 23.4.3 release.
Uptake 24.1.1 marketplace values.
JCS-14023 Status check missing from public subnet provisioning
Verified that the status check now shows for a public subnet, that a private endpoint and bastion still show the status check, and that a private subnet without a bastion still does not attempt to get the status check. Verified all conditions using both the ORM UI and the CLI.
Uptake 24.1.2 marketplace values.
- Make the keys of the maps of compute and volume resources end in 2 digits to preserve the iteration order, which is lexicographical, and to prevent volume attachments from being reassigned to other instances because of the iteration order in the list of compute instances (see the sketch below).
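A minimal sketch of why the two-digit suffix matters; the key prefix is illustrative, and this is plain Python rather than the stack's Terraform, just to show the lexicographical ordering of keys:

```python
# With single-digit suffixes, lexicographical ordering interleaves the keys
# ("...-1", "...-10", "...-11", "...-2"), so volume attachments can pair with
# the wrong compute instance. Zero-padding to two digits keeps the order stable.
unpadded = [f"wls-instance-{i}" for i in range(12)]
padded = [f"wls-instance-{i:02d}" for i in range(12)]

print(sorted(unpadded)[:4])  # ['wls-instance-0', 'wls-instance-1', 'wls-instance-10', 'wls-instance-11']
print(sorted(padded)[:4])    # ['wls-instance-00', 'wls-instance-01', 'wls-instance-02', 'wls-instance-03']
```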

Tests:
- Created a non-JRF stack with a new VCN and two nodes.
- Scaled up the stack to 4 nodes; verified the apply job completed successfully and that all servers were added.
- Scaled up the stack to 10 nodes, and verified the same points as above.
- Scaled up the stack to 11 nodes, made the same verifications as above, and verified that the existing block volume attachments and block volumes were not affected.
- Scaled up the stack to 20 nodes, and made the same verifications as above.
- Scaled up the stack to 30 nodes, and made the same verifications as above.
- Scaled down the stack to 10 nodes. Verified that only artifacts 29 down to 10 are deleted, and the rest of the servers are still running.