diff --git a/CHANGELOG.md b/CHANGELOG.md
index c61739843..561cd0d87 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -24,7 +24,10 @@ Versioning](https://semver.org/spec/v2.0.0.html).
- Added a utility script to make it easier for maintainers to propose releases,
regardless of the git remote configuration. See the previously closed
[issue](https://github.com/ComplianceAsCode/compliance-operator/issues/8) for
  more details.
+
+- Fixed a regression in `quay.io/compliance-operator/test-broken-content:kublet_default`
+  on OCP 4.12 clusters that caused the e2e test to fail. Since the test image has been
+  fixed, the datastream XML files for the test content image have been updated accordingly.
### Deprecations
diff --git a/images/testcontent/kubelet_default/ssg-eks-ds.xml b/images/testcontent/kubelet_default/ssg-eks-ds.xml
deleted file mode 100644
index 4d1079d7a..000000000
--- a/images/testcontent/kubelet_default/ssg-eks-ds.xml
+++ /dev/null
@@ -1,8343 +0,0 @@
-
- Amazon Elastic Kubernetes Service
- oval:ssg-installed_app_is_eks:def:1
-
-
- Amazon Elastic Kubernetes Service 1.21
- oval:ssg-installed_app_is_eks_1_21:def:1
-
-
- Amazon Elastic Kubernetes Service Node
- oval:ssg-installed_app_is_eks_node:def:1
-
-
-
-
-
- draft
- Guide to the Secure Configuration of Amazon Elastic Kubernetes Service
- This guide presents a catalog of security-relevant
-configuration settings for Amazon Elastic Kubernetes Service. It is a rendering of
-content structured in the eXtensible Configuration Checklist Description Format (XCCDF)
-in order to support security automation. The SCAP content
-is available in the scap-security-guide package, which is developed at
-
- https://www.open-scap.org/security-policies/scap-security-guide.
-
-Providing system administrators with such guidance informs them how to securely
-configure systems under their control in a variety of network roles. Policy
-makers and baseline creators can use this catalog of settings, with its
-associated references to higher-level security control catalogs, in order to
-assist them in security baseline creation. This guide is a catalog, not a
-checklist, and satisfaction of every item is not likely to be possible or
-sensible in many operational scenarios. However, the XCCDF format enables
-granular selection and adjustment of settings, and their association with OVAL
-and OCIL content provides an automated checking capability. Transformations of
-this document, and its associated automated checking content, are capable of
-providing baselines that meet a diverse set of policy objectives. Some example
-XCCDF Profiles, which are selections of items that form checklists and
-can be used as baselines, are available with this guide. They can be
-processed, in an automated fashion, with tools that support the Security
-Content Automation Protocol (SCAP). The NIST National Checklist Program (NCP),
-which provides required settings for the United States Government, is one example
-of a baseline created from this guidance.
-
- Do not attempt to implement any of the settings in
-this guide without first testing them in a non-operational environment. The
-creators of this guidance assume no responsibility whatsoever for its use by
-other parties, and make no guarantees, expressed or implied, about its
-quality, reliability, or any other characteristic.
-
- The ComplianceAsCode Project
-
- https://www.open-scap.org/security-policies/scap-security-guide
-
- Red Hat and Red Hat Enterprise Linux are either registered
-trademarks or trademarks of Red Hat, Inc. in the United States and other
-countries. All other names are registered trademarks or trademarks of their
-respective companies.
-
- 0.1.64
-
- SCAP Security Guide Project
- SCAP Security Guide Project
- Frank J Cameron (CAM1244) <cameron@ctc.com>
- 0x66656c6978 <0x66656c6978@users.noreply.github.com>
- Håvard F. Aasen <havard.f.aasen@pfft.no>
- Jack Adolph <jack.adolph@gmail.com>
- Edgar Aguilar <edgar.aguilar@oracle.com>
- Gabe Alford <redhatrises@gmail.com>
- Firas AlShafei <firas.alshafei@us.abb.com>
- Rodrigo Alvares <ralvares@redhat.com>
- Christopher Anderson <cba@fedoraproject.org>
- angystardust <angystardust@users.noreply.github.com>
- anivan-suse <anastasija.ivanovic@suse.com>
- anixon-rh <55244503+anixon-rh@users.noreply.github.com>
- Ikko Ashimine <eltociear@gmail.com>
- Chuck Atkins <chuck.atkins@kitware.com>
- ayfantis <ayfantis@localhost.localdomain>
- Ryan Ballanger <root@rballang-admin-2.fastenal.com>
- Alex Baranowski <alex@euro-linux.com>
- Eduardo Barretto <eduardo.barretto@canonical.com>
- Molly Jo Bault <Molly.Jo.Bault@ballardtech.com>
- Andrew Becker <A-Beck@users.noreply.github.com>
- Gabriel Becker <ggasparb@redhat.com>
- Alexander Bergmann <abergmann@suse.com>
- Dale Bewley <dale@bewley.net>
- Jose Luis BG <bgjoseluis@gmail.com>
- binyanling <binyanling@uniontech.com>
- Joseph Bisch <joseph.bisch@gmail.com>
- Jeffrey Blank <blank@eclipse.ncsc.mil>
- Olivier Bonhomme <ptitoliv@ptitoliv.net>
- Lance Bragstad <lbragstad@gmail.com>
- Ted Brunell <tbrunell@redhat.com>
- Marcus Burghardt <maburgha@redhat.com>
- Matthew Burket <mburket@redhat.com>
- Blake Burkhart <blake.burkhart@us.af.mil>
- Patrick Callahan <pmc@patrickcallahan.com>
- George Campbell <gcampbell@palantir.com>
- Nick Carboni <ncarboni@redhat.com>
- Carlos <64919342+carlosmmatos@users.noreply.github.com>
- James Cassell <james.cassell@ll.mit.edu>
- Frank Caviggia <fcaviggi@ra.iad.redhat.com>
- Eric Christensen <echriste@redhat.com>
- Dan Clark <danclark@redhat.com>
- Jayson Cofell <1051437+70k10@users.noreply.github.com>
- Caleb Cooper <coopercd@ornl.gov>
- Richard Maciel Costa <richard.maciel.costa@canonical.com>
- Deric Crago <deric.crago@gmail.com>
- crleekwc <crleekwc@gmail.com>
- cyarbrough76 <42849651+cyarbrough76@users.noreply.github.com>
- Maura Dailey <maura@eclipse.ncsc.mil>
- Klaas Demter <demter@atix.de>
- dhanushkar-wso2 <dhanushkar@wso2.com>
- Andrew DiPrinzio <andrew.diprinzio@jhuapl.edu>
- dom <dominique.blaze@devinci.fr>
- Jean-Baptiste Donnette <jean-baptiste.donnette@epita.fr>
- Marco De Donno <mdedonno1337@gmail.com>
- dperrone <dperrone@redhat.com>
- drax <applezip@gmail.com>
- Sebastian Dunne <sdunne@redhat.com>
- François Duthilleul <francoisduthilleul@gmail.com>
- Greg Elin <gregelin@gitmachines.com>
- eradot4027 <jrtonmac@gmail.com>
- Alexis Facques <alexis.facques@mythalesgroup.io>
- Leah Fisher <lfisher047@gmail.com>
- Yavor Georgiev <strandjata@gmail.com>
- Alijohn Ghassemlouei <alijohn@secureagc.com>
- Swarup Ghosh <swghosh@redhat.com>
- ghylock <ghylock@gmail.com>
- Andrew Gilmore <agilmore2@gmail.com>
- Joshua Glemza <jglemza@nasa.gov>
- Nick Gompper <forestgomp@yahoo.com>
- Loren Gordon <lorengordon@users.noreply.github.com>
- Patrik Greco <sikevux@sikevux.se>
- Steve Grubb <sgrubb@redhat.com>
- guangyee <gyee@suse.com>
- Marek Haicman <mhaicman@redhat.com>
- Vern Hart <vern.hart@canonical.com>
- Alex Haydock <alex@alexhaydock.co.uk>
- Rebekah Hayes <rhayes@corp.rivierautilities.com>
- Trey Henefield <thenefield@gmail.com>
- Henning Henkel <henning.henkel@helvetia.ch>
- hex2a <hex2a@users.noreply.github.com>
- John Hooks <jhooks@starscream.pa.jhbcomputers.com>
- Jakub Hrozek <jhrozek@redhat.com>
- De Huo <De.Huo@windriver.com>
- Robin Price II <robin@redhat.com>
- Yasir Imam <yimam@redhat.com>
- Jiri Jaburek <jjaburek@redhat.com>
- Keith Jackson <keithkjackson@gmail.com>
- Jeremiah Jahn <jeremiah@goodinassociates.com>
- Jakub Jelen <jjelen@redhat.com>
- Jessicahfy <Jessicahfy@users.noreply.github.com>
- Stephan Joerrens <Stephan.Joerrens@fiduciagad.de>
- Hunter Jones <hjones2199@gmail.com>
- Jono <jono@ubuntu-18.localdomain>
- justchris1 <justchris1@justchris1.email>
- Kai Kang <kai.kang@windriver.com>
- Charles Kernstock <charles.kernstock@ultra-ats.com>
- Yuli Khodorkovskiy <ykhodorkovskiy@tresys.com>
- Sherine Khoury <skhoury@redhat.com>
- Nathan Kinder <nkinder@redhat.com>
- Lee Kinser <lee.kinser@gmail.com>
- Evgeny Kolesnikov <ekolesni@redhat.com>
- Peter 'Pessoft' Kolínek <github@pessoft.com>
- Luke Kordell <luke.t.kordell@lmco.com>
- Malte Kraus <malte.kraus@suse.com>
- Seth Kress <seth.kress@dsainc.com>
- Felix Krohn <felix.krohn@helvetia.ch>
- kspargur <kspargur@kspargur.csb>
- Amit Kumar <amitkuma@redhat.com>
- Fen Labalme <fen@civicactions.com>
- Ade Lee <alee@redhat.com>
- Christopher Lee <Crleekwc@gmail.com>
- Ian Lee <lee1001@llnl.gov>
- Jarrett Lee <jarrettl@umd.edu>
- Joseph Lenox <joseph.lenox@collins.com>
- Jan Lieskovsky <jlieskov@redhat.com>
- Markus Linnala <Markus.Linnala@knowit.fi>
- Šimon Lukašík <slukasik@redhat.com>
- Milan Lysonek <mlysonek@redhat.com>
- Fredrik Lysén <fredrik@pipemore.se>
- Caitlin Macleod <caitelatte@gmail.com>
- Nick Maludy <nmaludy@gmail.com>
- Lokesh Mandvekar <lsm5@fedoraproject.org>
- Matus Marhefka <mmarhefk@redhat.com>
- Jamie Lorwey Martin <jlmartin@redhat.com>
- Carlos Matos <cmatos@redhat.com>
- Robert McAllister <rmcallis@redhat.com>
- Karen McCarron <kmccarro@redhat.com>
- Michael McConachie <michael@redhat.com>
- Marcus Meissner <meissner@suse.de>
- Khary Mendez <kmendez@redhat.com>
- Rodney Mercer <rmercer@harris.com>
- mgjadoul <mgjadoul@laptomatic.auth-o-matic.corp>
- Matt Micene <nzwulfin@gmail.com>
- Brian Millett <bmillett@gmail.com>
- Takuya Mishina <tmishina@jp.ibm.com>
- Mixer9 <35545791+Mixer9@users.noreply.github.com>
- mmosel <mmosel@kde.example.com>
- Zbynek Moravec <zmoravec@redhat.com>
- Kazuo Moriwaka <moriwaka@users.noreply.github.com>
- Michael Moseley <michael@eclipse.ncsc.mil>
- Renaud Métrich <rmetrich@redhat.com>
- Joe Nall <joe@nall.com>
- Neiloy <neiloy@redhat.com>
- Axel Nennker <axel@nennker.de>
- Michele Newman <mnewman@redhat.com>
- Sean O'Keeffe <seanokeeffe797@gmail.com>
- Jiri Odehnal <jodehnal@redhat.com>
- Ilya Okomin <ilya.okomin@oracle.com>
- Kaustubh Padegaonkar <theTuxRacer@gmail.com>
- Michael Palmiotto <mpalmiotto@tresys.com>
- Eryx Paredes <eryxp@lyft.com>
- Max R.D. Parmer <maxp@trystero.is>
- Arnaud Patard <apatard@hupstream.com>
- Jan Pazdziora <jpazdziora@redhat.com>
- pcactr <paul.c.arnold4.ctr@mail.mil>
- Kenneth Peeples <kennethwpeeples@gmail.com>
- Nathan Peters <Nathaniel.Peters@ca.com>
- Frank Lin PIAT <fpiat@klabs.be>
- Stefan Pietsch <mail.ipv4v6+gh@gmail.com>
- piggyvenus <piggyvenus@gmail.com>
- Vojtech Polasek <vpolasek@redhat.com>
- Orion Poplawski <orion@nwra.com>
- Nick Poyant <npoyant@redhat.com>
- Martin Preisler <mpreisle@redhat.com>
- Wesley Ceraso Prudencio <wcerasop@redhat.com>
- Raphael Sanchez Prudencio <rsprudencio@redhat.com>
- T.O. Radzy Radzykewycz <radzy@windriver.com>
- Kenyon Ralph <kenyon@kenyonralph.com>
- Mike Ralph <mralph@redhat.com>
- Federico Ramirez <federico.r.ramirez@oracle.com>
- rchikov <rumen.chikov@suse.com>
- Rick Renshaw <Richard_Renshaw@xtoenergy.com>
- Chris Reynolds <c.reynolds82@gmail.com>
- rhayes <rhayes@rivierautilities.com>
- Pat Riehecky <riehecky@fnal.gov>
- rlucente-se-jboss <rlucente@redhat.com>
- Juan Antonio Osorio Robles <juan.osoriorobles@eu.equinix.com>
- Matt Rogers <mrogers@redhat.com>
- Jesse Roland <jesse.roland@onyxpoint.com>
- Joshua Roys <roysjosh@gmail.com>
- rrenshaw <bofh69@yahoo.com>
- Chris Ruffalo <chris.ruffalo@gmail.com>
- rumch-se <77793453+rumch-se@users.noreply.github.com>
- Ray Shaw (Cont ARL/CISD) rvshaw <rvshaw@esme.arl.army.mil>
- Earl Sampson <ESampson@suse.com>
- sampsone <esampson@suse.com>
- Willy Santos <wsantos@redhat.com>
- Nagarjuna Sarvepalli <snagarju@redhat.com>
- Anderson Sasaki <33833274+ansasaki@users.noreply.github.com>
- Gautam Satish <gautams@hpe.com>
- Watson Sato <wsato@redhat.com>
- Satoru SATOH <satoru.satoh@gmail.com>
- Alexander Scheel <ascheel@redhat.com>
- Bryan Schneiders <pschneiders@trisept.com>
- shaneboulden <shane.boulden@gmail.com>
- Vincent Shen <47534281+Vincent056@users.noreply.github.com>
- Dhriti Shikhar <dhriti.shikhar.rokz@gmail.com>
- Spencer Shimko <sshimko@tresys.com>
- Mark Shoger <mshoger@redhat.com>
- THOBY Simon <Simon.THOBY@viveris.fr>
- Thomas Sjögren <konstruktoid@users.noreply.github.com>
- Francisco Slavin <fslavin@tresys.com>
- David Smith <dsmith@eclipse.ncsc.mil>
- Kevin Spargur <kspargur@redhat.com>
- Kenneth Stailey <kstailey.lists@gmail.com>
- Leland Steinke <leland.j.steinke.ctr@mail.mil>
- Justin Stephenson <jstephen@redhat.com>
- Brian Stinson <brian@bstinson.com>
- Jake Stookey <jakestookey@gmail.com>
- Jonathan Sturges <jsturges@redhat.com>
- Ian Tewksbury <itewk@redhat.com>
- Philippe Thierry <phil@reseau-libre.net>
- Derek Thurston <thegrit@gmail.com>
- tianzhenjia <jiatianzhen@cmss.chinamobile.com>
- Greg Tinsley <gtinsley@redhat.com>
- Paul Tittle <ptittle@cmf.nrl.navy.mil>
- tom <tom@localhost.localdomain>
- tomas.hudik <tomas.hudik@embedit.cz>
- Jeb Trayer <jeb.d.trayer@uscg.mil>
- TrilokGeer <tgeer@redhat.com>
- Viktors Trubovics <viktors.trubovics@suse.com>
- Nico Truzzolino <nico.truzzolino@gmx.de>
- Brian Turek <brian.turek@gmail.com>
- Matěj Týč <matyc@redhat.com>
- VadimDor <29509093+VadimDor@users.noreply.github.com>
- Trevor Vaughan <tvaughan@onyxpoint.com>
- vtrubovics <82443408+vtrubovics@users.noreply.github.com>
- Samuel Warren <swarren@redhat.com>
- wcushen <54533890+wcushen@users.noreply.github.com>
- Shawn Wells <shawn@shawndwells.io>
- Daniel E. White <linuxdan@users.noreply.github.com>
- Bernhard M. Wiedemann <bwiedemann@suse.de>
- Roy Williams <roywilli@roywilli.redhat.com>
- Willumpie <willumpie@xs4all.nl>
- Rob Wilmoth <rwilmoth@redhat.com>
- Lucas Yamanishi <lucas.yamanishi@onyxpoint.com>
- Xirui Yang <xirui.yang@oracle.com>
- yarunachalam <yarunachalam@suse.com>
- Guang Yee <guang.yee@suse.com>
- Achilleas John Yfantis <ayfantis@redhat.com>
- YiLin.Li <YiLin.Li@linux.alibaba.com>
- YuQing <yyq0391@163.com>
- Kevin Zimmerman <kevin.zimmerman@kitware.com>
- Luigi Mario Zuccarelli <luzuccar@redhat.com>
- Jan Černý <jcerny@redhat.com>
- Michal Šrubař <msrubar@redhat.com>
- https://github.com/ComplianceAsCode/content/releases/latest
-
-
- CIS Amazon Elastic Kubernetes Service (EKS) Benchmark - Node
- This profile defines a baseline that aligns to the Center for Internet Security®
-Amazon Elastic Kubernetes Service (EKS) Benchmark™, V1.0.1.
-
-This profile includes Center for Internet Security®
-Amazon Elastic Kubernetes Service (EKS)™ content.
-
-This profile is applicable to EKS 1.21 and greater.
-
- CIS Amazon Elastic Kubernetes Service Benchmark - Platform
- This profile defines a baseline that aligns to the Center for Internet Security®
-Amazon Elastic Kubernetes Service (EKS) Benchmark™, V1.0.1.
-
-This profile includes Center for Internet Security®
-Amazon Elastic Kubernetes Service (EKS)™ content.
-
-This profile is applicable to EKS 1.21 and greater.
-
- Introduction
- The purpose of this guidance is to provide security configuration
-recommendations and baselines for Amazon Elastic Kubernetes Service.
-The guide is intended for system and/or application administrators. Readers are assumed to
-possess basic system administration skills for the application's operating systems, as well
-as some familiarity with the product's documentation and administration
-conventions. Some instructions within this guide are complex.
-All directions should be followed completely and with understanding of
-their effects in order to avoid serious adverse effects on the system
-and its security.
-
- General Principles
- The following general principles motivate much of the advice in this
-guide and should also influence any configuration decisions that are
-not explicitly covered.
-
- Encrypt Transmitted Data Whenever Possible
- Data transmitted over a network, whether wired or wireless, is susceptible
-to passive monitoring. Whenever practical solutions for encrypting
-such data exist, they should be applied. Even if data is expected to
-be transmitted only over a local network, it should still be encrypted.
-Encrypting authentication data, such as passwords, is particularly
-important. Networks of Amazon Elastic Kubernetes Service machines can and should be configured
-so that no unencrypted authentication data is ever transmitted between
-machines.
-
-
- Least Privilege
- Grant the least privilege necessary for user accounts and software to perform tasks.
-For example, sudo can be implemented to limit authorization to super user
-accounts on the system only to designated personnel. Another example is to limit
-logins on server systems to only those administrators who need to log into them in
-order to perform administration tasks.
-
-
- Run Different Network Services on Separate Systems
- Whenever possible, a server should be dedicated to serving exactly one
-network service. This limits the number of other services that can
-be compromised in the event that an attacker is able to successfully
-exploit a software flaw in one network service.
-
-
- Configure Security Tools to Improve System Robustness
- Several tools exist which can be effectively used to improve a system's
-resistance to and detection of unknown attacks. These tools can improve
-robustness against attack at the cost of relatively little configuration
-effort.
-
-
-
- How to Use This Guide
- Readers should heed the following points when using the guide.
-
- Formatting Conventions
- Commands intended for shell execution, as well as configuration file text,
-are featured in a monospace font. Italics are used
-to indicate instances where the system administrator must substitute
-the appropriate information into a command or configuration file.
-
-
- Read Sections Completely and in Order
- Each section may build on information and recommendations discussed in
-prior sections. Each section should be read and understood completely;
-instructions should never be blindly applied. Relevant discussion may
-occur after instructions for an action.
-
-
- Reboot Required
- A system or service reboot is implicitly required after some actions in order to
-complete the reconfiguration of the system. In many cases, the changes
-will not take effect until a reboot is performed. In order to ensure
-that changes are applied properly and to test functionality, always
-reboot the system after applying a set of recommendations from this guide.
-
-
- Root Shell Environment Assumed
- Most of the actions listed in this document are written with the
-assumption that they will be executed by the root user running the
-/bin/bash shell. Commands preceded with a hash mark (#)
-assume that the administrator will execute the commands as root, i.e.
-apply the command via sudo whenever possible, or use
-su to gain root privileges if sudo cannot be
-used. Commands which can be executed as a non-root user are preceded
-by a dollar sign ($) prompt.
-
-
- Test in Non-Production Environment
- This guidance should always be tested in a non-production environment
-before deployment. This test environment should simulate the setup in
-which the system will be deployed as closely as possible.
-
-
-
-
- Kubernetes Settings
- Each section of this configuration guide includes information about the
-configuration of a Kubernetes cluster and a set of recommendations for
-hardening the configuration. For each hardening recommendation, information
-on how to implement the control and/or how to verify or audit the control
-is provided. In some cases, remediation information is also provided.
-
-Some of the settings in the hardening guide are in place by default. The
-audit information for these settings is provided in order to verify that
-the cluster administrator has not made changes that would be less secure.
-A small number of items require configuration.
-
-Finally, there are some recommendations that require decisions by the
-system operator, such as audit log size, retention, and related settings.
-
- Root of files obtained from OCP nodes
- When scanning OpenShift clusters, some settings are not exposed as files.
-In the case that they are exported from the cluster (typically as yaml files),
-this variable determines the directory where they will end up.
- /kubernetes-api-resources
-
-
- Kubernetes - Account and Access Control
- In traditional Unix security, if an attacker gains
-shell access to a certain login account, they can perform any action
-or access any file to which that account has access. The same
-idea applies to cloud technology such as Kubernetes. Therefore,
-making it more difficult for unauthorized people to gain shell
-access to accounts, particularly to privileged accounts, is a
-necessary part of securing a system. This section introduces
-mechanisms for restricting access to accounts under
-Kubernetes.
-
- Use Dedicated Service Accounts
- Kubernetes workloads should not use cluster node service accounts to
-authenticate to Amazon EKS APIs. Each Kubernetes workload that needs to
-authenticate to other AWS services using AWS IAM should be provisioned with a
-dedicated Service account.
- 5.2.1
- Manual approaches for authenticating Kubernetes workloads running on Amazon
-EKS against AWS APIs are: storing service account keys as a Kubernetes secret
-(which introduces manual key rotation and potential for key compromise); or
-use of the underlying nodes' IAM Service account, which violates the
-principle of least privilege on a multi-tenanted node, when one pod needs
-to have access to a service, but every other pod on the node that uses the
-Service account does not.
- CCE-87818-1
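-
-As a sketch, a dedicated service account can be tied to its own IAM role via
-IAM Roles for Service Accounts; the names and role ARN below are placeholders:
-
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: my-workload
-  namespace: my-namespace
-  annotations:
-    # binds this service account to a dedicated, least-privilege IAM role
-    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-workload-role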
-
-
-
-
-
-
- Authentication
- In cloud workloads, there are many ways to create and configure
-multiple authentication services. Some of these authentication
-methods may not be secure or follow common methodologies, or they may not
-be secure by default. This section introduces mechanisms for
-configuring authentication systems in Kubernetes.
-
- OAuth Token Inactivity Timeout
- Enter OAuth Token Inactivity Timeout
- 10m0s
- 10m0s
-
-
- Manage Users with AWS IAM
- Amazon EKS uses IAM to provide authentication to your Kubernetes cluster
-through the AWS IAM Authenticator for Kubernetes. You can configure the stock
-kubectl client to work with Amazon EKS by installing the AWS IAM
-Authenticator for Kubernetes and modifying your kubectl configuration file to
-use it for authentication.
- 5.5.1
- On- and off-boarding users is often difficult to automate and prone to error.
-Using a single source of truth for user permissions reduces the number of
-locations that an individual must be off-boarded from, and prevents users
-from gaining unique permission sets that increase the cost of audit.
- CCE-86301-9
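-
-For illustration, a kubeconfig user entry that authenticates through AWS IAM
-might look as follows; the cluster name and region below are placeholders:
-
-users:
-- name: my-eks-cluster
-  user:
-    exec:
-      apiVersion: client.authentication.k8s.io/v1beta1
-      command: aws
-      # fetches a short-lived IAM-backed token for the cluster
-      args: ["eks", "get-token", "--region", "us-east-1", "--cluster-name", "my-eks-cluster"]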
-
-
-
-
-
-
- Kubernetes - General Security Practices
- Contains evaluations for general security practices for operating a Kubernetes environment.
-
- Consider Fargate for Untrusted Workloads
- It is best practice to restrict or fence untrusted workloads when running in
-a multi-tenant environment.
- 5.6.1
- AWS Fargate is a technology that provides on-demand, right-sized compute
-capacity for containers. With AWS Fargate, you no longer have to provision,
-configure, or scale groups of virtual machines to run containers. This
-removes the need to choose server types, decide when to scale your node
-groups, or optimize cluster packing.
-
-You can control which pods start on Fargate and how they run with Fargate
-profiles, which are defined as part of your Amazon EKS cluster.
-
-Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that
-are built by AWS using the upstream, extensible model provided by Kubernetes.
-These controllers run as part of the Amazon EKS managed Kubernetes control
-plane and are responsible for scheduling native Kubernetes pods onto Fargate.
-The Fargate controllers include a new scheduler that runs alongside the
-default Kubernetes scheduler in addition to several mutating and validating
-admission controllers. When you start a pod that meets the criteria for
-running on Fargate, the Fargate controllers running in the cluster recognize,
-update, and schedule the pod onto Fargate.
-
-Each pod running on Fargate has its own isolation boundary and does not share
-the underlying kernel, CPU resources, memory resources, or elastic network
-interface with another pod.
- CCE-89091-3
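-
-One possible sketch of fencing a namespace onto Fargate with the AWS CLI; all
-names and the ARN below are placeholders:
-
-$ aws eks create-fargate-profile \
-    --cluster-name my-eks-cluster \
-    --fargate-profile-name untrusted-workloads \
-    --pod-execution-role-arn arn:aws:iam::111122223333:role/my-fargate-pod-role \
-    --selectors namespace=untrusted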
-
-
-
-
-
-
- Kubernetes Kubelet Settings
- The Kubernetes Kubelet is an agent that runs on each node in the cluster. It
-makes sure that containers are running in a pod.
-
-The kubelet takes a set of PodSpecs that are provided through various
-mechanisms and ensures that the containers described in those PodSpecs are
-running and healthy. The kubelet doesn’t manage containers which were not
-created by Kubernetes.
-
- Configure Kubelet Event Limit
- Maximum event creations per second.
- 5
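-
-For illustration only, assuming this limit maps to the kubelet's
-eventRecordQPS setting, the kubelet configuration file might contain:
-
-eventRecordQPS: 5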
-
-
- kubelet - Authorization Options
- ABAC - Attribute-Based Access Control (ABAC) mode allows you to configure policies using local files.
-RBAC - Role-based access control (RBAC) mode allows you to create and store policies using the Kubernetes API.
-Webhook - WebHook is an HTTP callback mode that allows you to manage authorization using a remote REST endpoint.
-Node - Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
-AlwaysDeny - This flag blocks all requests. Use this flag only for testing.
- Webhook
- ABAC
- RBAC
- Webhook
- Node
- AlwaysDeny
-
-
- Configure Kubelet EvictionHard Image FS Available
- Image FS Available for the EvictionHard threshold to trigger.
- 10%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionHard Image FS inodes Free
- Image FS inodes Free for the EvictionHard threshold to trigger.
- 5%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionHard Memory Available
- Memory Available for the EvictionHard threshold to trigger.
- 200Mi
-
-
- Configure Kubelet EvictionHard NodeFS Available
- Node FS Available for the EvictionHard threshold to trigger.
- 5%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionHard Node FS inodes Free
- Node FS inodes Free for the EvictionHard threshold to trigger.
- 4%
- 4%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionSoft Image FS Available
- Image FS Available for the EvictionSoft threshold to trigger.
- 15%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionSoft Image FS inodes Free
- Image FS inodes Free for the EvictionSoft threshold to trigger.
- 10%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionSoft Memory Available
- Memory Available for the EvictionSoft threshold to trigger.
- 500Mi
-
-
- Configure Kubelet EvictionSoft NodeFS Available
- Node FS Available for the EvictionSoft threshold to trigger.
- 10%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet EvictionSoft Node FS inodes Free
- Node FS inodes Free for the EvictionSoft threshold to trigger.
- 5%
- 5%
- 10%
- 15%
- 20%
-
-
- Configure Kubelet use of the Strong Cryptographic Ciphers
- Cryptographic Ciphers Available for Kubelet, separated by commas
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-
-
- Configure Kubelet use of the Strong Cryptographic Ciphers
- Cryptographic Ciphers Available for Kubelet
- ^(TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256)$
-
-
- Configure which node to scan based on role
- Configure which node to scan based on role
- worker
- master
-
-
- Streaming Connection Timeout Options
- Time until idle connections time out. Use (s) for seconds, (m) for minutes,
-and (h) for hours.
- 5m0s
- 5m0s
- 10m0s
- 30m0s
- 1h
- 2h
- 4h
- 6h
- 8h
-
-
- Disable Anonymous Authentication to the Kubelet
- By default, anonymous access to the Kubelet server is enabled. This
-configuration check ensures that anonymous requests to the Kubelet
-server are disabled. Edit the Kubelet server configuration file
-/etc/kubernetes/kubelet/kubelet-config.json on the kubelet node(s)
-and set the below parameter:
-
-authentication:
- ...
- anonymous:
- enabled: false
- ...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.1
- When enabled, requests that are not rejected by other configured
-authentication methods are treated as anonymous requests. These
-requests are then served by the Kubelet server. OpenShift Operators should
-rely on authentication to authorize access and disallow anonymous
-requests.
-
-
-
-
-
-
-
-
-
- Ensure authorization is set to Webhook
- Unauthenticated/unauthorized users should have no access to OpenShift nodes.
-The Kubelet should be set to only allow Webhook authorization.
-To ensure that the Kubelet requires authorization,
-validate that authorization is configured to Webhook
-in /etc/kubernetes/kubelet/kubelet-config.json:
-
-authorization:
- mode: Webhook
- ...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.2
- Ensuring that the authorization is configured correctly helps enforce that
-unauthenticated/unauthorized users have no access to OpenShift nodes.
-
-
-
-
-
-
-
-
-
- kubelet - Configure the Client CA Certificate
- By default, the kubelet is not configured with a CA certificate which
-can subject the kubelet to man-in-the-middle attacks.
-
-To configure a client CA certificate, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-
-authentication:
-...
- x509:
- clientCAFile: /etc/kubernetes/pki/ca.crt
-...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.3
- Not having a CA certificate for the kubelet will subject the kubelet to possible
-man-in-the-middle attacks especially on unsafe or untrusted networks.
-Certificate validation for the kubelet allows the API server to validate
-the kubelet's identity.
-
-
-
-
-
-
-
-
-
- kubelet - Hostname Override handling
- Normally, OpenShift lets the kubelet get the hostname from either the
-cloud provider itself, or from the node's hostname. This ensures that
-the PKI allocated by the deployment uses the appropriate values, is valid
-and keeps working throughout the lifecycle of the cluster. IP addresses
-are not used, and hence this makes it easier for security analysts to
-associate kubelet logs with the appropriate node.
- CIP-003-3 R6
- CIP-004-3 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- 3.2.8
- Allowing hostnames to be overridden creates issues around resolving nodes
-in addition to TLS configuration, certificate validation, and log correlation
-and validation.
-
-
-
-
-
-
- kubelet - Enable Certificate Rotation
- To enable the kubelet to rotate client certificates, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-
-...
-rotateCertificates: true
-...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.10
- Allowing the kubelet to auto-update the certificates ensures that there is no downtime
-in certificate renewal and also ensures confidentiality and integrity.
-
-
-
-
-
-
-
-
-
- kubelet - Enable Client Certificate Rotation
- To enable the kubelet to rotate client certificates, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-
-featureGates:
-...
- RotateKubeletClientCertificate: true
-...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.10
- Allowing the kubelet to auto-update the certificates ensures that there is no downtime
-in certificate renewal and also ensures confidentiality and integrity.
-
-
-
-
-
-
-
-
-
- kubelet - Allow Automatic Firewall Configuration
- The kubelet has the ability to automatically configure the firewall to allow
-the containers' required ports and connections to networking resources and destinations;
-misconfiguring these parameters could potentially create a security incident.
-To allow the kubelet to modify the firewall, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-makeIPTablesUtilChains: true
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.7
- The kubelet should automatically configure the firewall settings to allow access and
-networking traffic through. This ensures that when a pod or container is running, the
-correct ports are configured, and that those ports are removed when a pod or
-container no longer exists.
-
-
-
-
-
-
-
-
-
- kubelet - Enable Protect Kernel Defaults
-Protect tuned kernel parameters from being overwritten by the kubelet.
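-
-Following the pattern of the other kubelet rules here (and matching the check
-below that expects .protectKernelDefaults to equal true), a minimal sketch of
-the setting in /etc/kubernetes/kubelet/kubelet-config.json:
-
-...
-protectKernelDefaults: true
-...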
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.6
- Kernel parameters are usually tuned and hardened by the system administrators
-before putting the systems into production. These parameters protect the
-kernel and the system. Your kubelet kernel defaults that rely on such
-parameters should be appropriately set to match the desired secured system
-state. Ignoring this could potentially lead to running pods with undesired
-kernel behavior.
-
-
-
-
-
-
-
-
-
- kubelet - Enable Server Certificate Rotation
- To enable the kubelet to rotate server certificates, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-
-featureGates:
-...
- RotateKubeletServerCertificate: true
-...
-
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.11
- Allowing the kubelet to auto-update the certificates ensures that there is no downtime
-in certificate renewal and also ensures confidentiality and integrity.
-
-
-
-
-
-
-
-
-
- kubelet - Do Not Disable Streaming Timeouts
- Timeouts for streaming connections should not be disabled as they help to prevent
-denial-of-service attacks.
-To configure streaming connection timeouts, edit the kubelet configuration
-file /etc/kubernetes/kubelet/kubelet-config.json
-on the kubelet node(s) and set the below parameter:
-streamingConnectionIdleTimeout:
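-
-For example, using the default value offered above (a value of 0 would
-disable the timeout and should be avoided):
-
-streamingConnectionIdleTimeout: 5m0s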
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.2.5
- Ensuring connections have timeouts helps to protect against denial-of-service attacks as
-well as to disconnect inactive connections. In addition, setting connection timeouts helps
-to prevent running out of ephemeral ports.
-
-
-
-
-
-
-
-
-
-
- kubelet - Ensure that the --read-only-port is secured
- Disable the read-only port.
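-
-As a sketch, matching the check below that expects .readOnlyPort to equal 0,
-the kubelet configuration file /etc/kubernetes/kubelet/kubelet-config.json
-would contain:
-
-readOnlyPort: 0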
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- 3.2.4
- The Kubelet process provides a read-only API in addition to the main Kubelet API.
-Unauthenticated access is provided to this read-only API, which could allow
-retrieval of potentially sensitive information about the cluster.
-
-
-
-
-
-
-
-
-
-
- OpenShift - Logging Settings
- Contains evaluations for the cluster's logging configuration settings.
-
- Configure the OpenShift Audit Profile
- Audit log profiles define how to log requests that come to the OpenShift
-API server, the Kubernetes API server, and the OAuth API server.
- Default
- Default
- WriteRequestBodies
- AllRequestBodies
-
-
- Ensure Audit Logging is Enabled
- The audit logs are part of the EKS managed Kubernetes control plane logs that
-are managed by Amazon EKS. Amazon EKS is integrated with AWS CloudTrail, a
-service that provides a record of actions taken by a user, role, or an AWS
-service in Amazon EKS. CloudTrail captures all API calls for Amazon EKS as
-events. The calls captured include calls from the Amazon EKS console and code
-calls to the Amazon EKS API operations.
- 2.1.1
- Exporting logs and metrics to a dedicated, persistent datastore such as
-CloudTrail ensures availability of audit data following a cluster security
-event, and provides a central location for analysis of log and metric data
-collated from multiple sources.
- CCE-87445-3
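-
-A sketch of enabling the audit log type with the AWS CLI; the cluster name is
-a placeholder:
-
-$ aws eks update-cluster-config \
-    --name my-eks-cluster \
-    --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'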
-
-
-
-
-
-
- Kubernetes - Network Configuration and Firewalls
- Most systems must be connected to a network of some
-sort, and this brings with it the substantial risk of network
-attack. This section discusses the security impact of decisions
-about networking which must be made when configuring a system.
-
-This section also discusses firewalls, network access
-controls, and other network security frameworks, which allow
-system-level rules to be written that can limit an attacker's ability
-to connect to your system. These rules can specify that network
-traffic should be allowed or denied from certain IP addresses,
-hosts, and networks. The rules can also specify which of the
-system's network services are available to particular hosts or
-networks.
-
- Ensure that application Namespaces have Network Policies defined.
- Use network policies to isolate traffic in your cluster network.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/networking.k8s.io/v1/networkpolicies
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default") | .metadata.namespace] | unique
- and persist it to the local
- /apis/networking.k8s.io/v1/networkpolicies#51742b3e87275db9eb7fc6c0286a9e536178a2a83e3670b615ceaf545e7fd300
- file.
- /api/v1/namespaces
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default")]
- and persist it to the local
- /api/v1/namespaces#34d4beecc95c65d815d9d48fd4fdcb0c521631852ad088ef74e36d012b0e1e0d
- file.
-
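-For instance, assuming the oc and jq command-line tools are available, the
-first query could be sketched as:
-
-$ oc get --raw /apis/networking.k8s.io/v1/networkpolicies \
-    | jq '[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default") | .metadata.namespace] | unique'
-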
- CIP-003-8 R4
- CIP-003-8 R4.2
- CIP-003-8 R5
- CIP-003-8 R6
- CIP-004-6 R2.2.4
- CIP-004-6 R3
- CIP-007-3 R2
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R6.1
- AC-4
- AC-4(21)
- CA-3(5)
- CM-6
- CM-6(1)
- CM-7
- CM-7(1)
- SC-7
- SC-7(3)
- SC-7(5)
- SC-7(8)
- SC-7(12)
- SC-7(13)
- SC-7(18)
- SC-7(10)
- SI-4(22)
- Req-1.1.4
- Req-1.2
- Req-1.2.1
- Req-1.3.1
- Req-1.3.2
- Req-2.2
- SRG-APP-000038-CTR-000105
- SRG-APP-000039-CTR-000110
- SRG-APP-000141-CTR-000315
- SRG-APP-000141-CTR-000320
- SRG-APP-000142-CTR-000325
- SRG-APP-000142-CTR-000330
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- SRG-APP-000645-CTR-001410
- 4.3.2
- Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application. Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. When a network policy is introduced to a given namespace, all traffic not allowed
-by the policy is denied. However, if there are no network policies in a namespace all
-traffic will be allowed into and out of the pods in that namespace.
-
-
-
-
-
-
-
-
-
- Ensure Network Policy is Enabled
- Use Network Policy to restrict pod to pod traffic within a cluster and
-segregate workloads.
- 5.4.4
- By default, all pod to pod traffic within a cluster is allowed. Network
-Policy creates a pod-level firewall that can be used to restrict traffic
-between sources. Pod traffic is restricted by having a Network Policy that
-selects it (through the use of labels). Once there is any Network Policy in a
-namespace selecting a particular pod, that pod will reject any connections
-that are not allowed by any Network Policy. Other pods in the namespace that
-are not selected by any Network Policy will continue to accept all traffic.
-
-Network Policies are managed via the Kubernetes Network Policy API and
-enforced by a network plugin; simply creating the resource without a
-compatible network plugin to implement it will have no effect. EKS supports
-Network Policy enforcement through the use of Calico.
- CCE-88207-6
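-
-As a sketch, a minimal default-deny ingress policy for a single namespace;
-the policy and namespace names below are placeholders:
-
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-deny-ingress
-  namespace: my-namespace
-spec:
-  podSelector: {}        # an empty selector matches every pod in the namespace
-  policyTypes:
-  - Ingress              # with no ingress rules defined, all ingress is denied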
-
-
-
-
-
- Encrypt Traffic to Load Balancers and Workloads
- Encrypt traffic to HTTPS load balancers using TLS certificates.
- 5.4.5
- Encrypting traffic between users and your Kubernetes workload is fundamental
-to protecting data sent over the web.
- CCE-89133-3
-
-
-
-
-
- Restrict Access to the Control Plane Endpoint
- Enable Endpoint Private Access to restrict access to the cluster's control
-plane to only an allowlist of authorized IPs.
- 5.4.1
- Authorized networks are a way of specifying a restricted range of IP
-addresses that are permitted to access your cluster's control plane.
-Kubernetes Engine uses both Transport Layer Security (TLS) and authentication
-to provide secure access to your cluster's control plane from the public
-internet. This provides you the flexibility to administer your cluster from
-anywhere; however, you might want to further restrict access to a set of IP
-addresses that you control. You can set this restriction by specifying an
-authorized network. Restricting access to an authorized network can provide
-additional security benefits for your container cluster, including:
-
-Better protection from outsider attacks: Authorized networks provide an
-additional layer of security by limiting external access to a specific set
-of addresses you designate, such as those that originate from your
-premises. This helps protect access to your cluster in the case of a
-vulnerability in the cluster's authentication or authorization
-mechanism.
-
-Better protection from insider attacks: Authorized networks help protect
-your cluster from accidental leaks of master certificates from your
-company's premises. Leaked certificates used from outside Amazon EC2 and
-outside the authorized IP ranges (for example, from addresses outside your
-company) are still denied access.
- CCE-86182-3
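-
-One possible sketch with the AWS CLI; the cluster name and CIDR below are
-placeholders:
-
-$ aws eks update-cluster-config \
-    --name my-eks-cluster \
-    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true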
-
-
-
-
-
- Ensure Private Endpoint Access
- Disable access to the Kubernetes API from outside the node network if it is
-not required.
- 5.4.2
- In a private cluster, the master node has two endpoints, a private and public
-endpoint. The private endpoint is the internal IP address of the master,
-behind an internal load balancer in the master's VPC network. Nodes
-communicate with the master using the private endpoint. The public endpoint
-enables the Kubernetes API to be accessed from outside the master's VPC
-network.
-
-Although the Kubernetes API requires an authorized token to perform sensitive
-actions, a vulnerability could potentially expose the Kubernetes API publicly
-with unrestricted access. Additionally, an attacker may be able to identify
-the current cluster and Kubernetes API version and determine whether it is
-vulnerable to an attack. Unless required, disabling the public endpoint will help
-prevent such threats, and require the attacker to be on the master's VPC
-network to perform any attack on the Kubernetes API.
- CCE-88813-1
-
-
-
-
-
- Ensure Cluster Private Nodes
- Disable public IP addresses for cluster nodes, so that they only have private
-IP addresses. Private Nodes are nodes with no public IP addresses.
- 5.4.3
- Disabling public IP addresses on cluster nodes restricts access to only
-internal networks, forcing attackers to obtain local network access before
-attempting to compromise the underlying Kubernetes hosts.
- CCE-88669-7
-
-
-
-
-
-
- Kubernetes - Registry Security Practices
- Contains evaluations for Kubernetes registry security practices, and cluster-wide registry configuration.
-
- Only use approved container registries
- Use approved container registries.
- 5.1.4
- Allowing unrestricted access to external container registries provides the
-opportunity for malicious or unapproved containers to be deployed into the
-cluster. Allowlisting only approved container registries reduces this risk.
- CCE-86901-6
-
-
-
-
-
- Ensure Image Vulnerability Scanning
- Scan images being deployed to Amazon EKS for vulnerabilities.
- 5.1.1
- Vulnerabilities in software packages can be exploited by hackers or malicious
-users to obtain unauthorized access to local cloud resources. Amazon ECR and
-other third party products allow images to be scanned for known
-vulnerabilities.
- CCE-88990-7
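-
-For example, scan-on-push could be enabled per ECR repository; the repository
-name below is a placeholder:
-
-$ aws ecr put-image-scanning-configuration \
-    --repository-name my-repo \
-    --image-scanning-configuration scanOnPush=true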
-
-
-
-
-
- Ensure Cluster Service Account with read-only access to Amazon ECR
- Configure the Cluster Service Account with Storage Object Viewer Role to only
-allow read-only access to Amazon ECR.
- 5.1.3
- The Cluster Service Account does not require administrative access to Amazon
-ECR, only requiring pull access to containers to deploy onto Amazon EKS.
-Restricting permissions follows the principles of least privilege and
-prevents credentials from being abused beyond the required role.
- CCE-86681-4
-
-
-
-
-
- Minimize user access to Amazon ECR
- Restrict user access to Amazon ECR, limiting interaction with build images to
-only authorized personnel and service accounts.
- 5.1.2
- Weak access control to Amazon ECR may allow malicious users to replace built
-images with vulnerable containers.
- CCE-89643-1
-
-
-
-
-
-
- Kubernetes Secrets Management
- Secrets let you store and manage sensitive information,
-such as passwords, OAuth tokens, and ssh keys.
-Such information might otherwise be put in a Pod
-specification or in an image.
-
- Ensure Kubernetes Secrets are Encrypted
- Encrypt Kubernetes secrets, stored in etcd, using secrets encryption feature
-during Amazon EKS cluster creation.
- 5.3.1
- Kubernetes can store secrets that pods can access via a mounted volume.
-Today, Kubernetes secrets are stored with Base64 encoding, but encrypting them is
-the recommended approach. Amazon EKS clusters version 1.13 and higher support
-the capability of encrypting your Kubernetes secrets using AWS Key Management
-Service (KMS) Customer Managed Keys (CMK). The only requirement is to enable
-the encryption provider support during EKS cluster creation.
-
-Use AWS Key Management Service (KMS) keys to provide envelope encryption of
-Kubernetes secrets stored in Amazon EKS. Implementing envelope encryption is
-considered a security best practice for applications that store sensitive
-data and is part of a defense in depth security strategy.
-
-Application-layer Secrets Encryption provides an additional layer of security
-for sensitive data, such as user defined Secrets and Secrets required for the
-operation of the cluster, such as service account keys, which are all stored
-in etcd.
-
-Using this functionality, you can use a key, that you manage in AWS KMS, to
-encrypt data at the application layer. This protects against attackers in the
-event that they manage to gain access to etcd.
- CCE-90708-9
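-
-As a sketch, the encryption provider can be enabled at cluster creation; all
-names, subnets, and ARNs below are placeholders:
-
-$ aws eks create-cluster \
-    --name my-eks-cluster \
-    --role-arn arn:aws:iam::111122223333:role/my-cluster-role \
-    --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb \
-    --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"}}]'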
-
-
-
-
-
-
- Kubernetes - Worker Node Settings
- Contains evaluations for the worker node configuration settings.
-
- Verify Group Who Owns The Kubelet Configuration File
- To properly set the group owner of /etc/kubernetes/kubelet/kubelet-config.json, run the command: $ sudo chgrp root /etc/kubernetes/kubelet/kubelet-config.json
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.1.4
- The kubelet configuration file contains information about the configuration of the
-OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Worker Kubeconfig File
- To properly set the group owner of /var/lib/kubelet/kubeconfig, run the command: $ sudo chgrp root /var/lib/kubelet/kubeconfig
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- The worker kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
-
-
-
-
- Verify User Who Owns The Kubelet Configuration File
- To properly set the owner of /etc/kubernetes/kubelet/kubelet-config.json, run the command: $ sudo chown root /etc/kubernetes/kubelet/kubelet-config.json
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.1.4
- The kubelet configuration file contains information about the configuration of the
-OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
-
-
-
-
- Verify User Who Owns The Worker Kubeconfig File
- To properly set the owner of /var/lib/kubelet/kubeconfig, run the command: $ sudo chown root /var/lib/kubelet/kubeconfig
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.1.2
- The worker kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
-
-
-
-
- Verify Permissions on The Kubelet Configuration File
-
-To properly set the permissions of /etc/kubernetes/kubelet/kubelet-config.json, run the command:
-$ sudo chmod 0644 /etc/kubernetes/kubelet/kubelet-config.json
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.1.3
- If the kubelet configuration file is writable by a group-owner or the
-world, the risk of its compromise is increased. The file contains the configuration of
-an OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
-
-
-
-
- Verify Permissions on the Worker Kubeconfig File
-
-To properly set the permissions of /var/lib/kubelet/kubeconfig, run the command:
-$ sudo chmod 0644 /var/lib/kubelet/kubeconfig
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 3.1.1
- If the worker kubeconfig file is writable by a group-owner or the
-world, the risk of its compromise is increased. The file contains the administration configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
- combine_ovals.py from SCAP Security Guide
- ssg: [0, 1, 64], python: 3.10.6
- 5.11
- 2022-08-11T18:55:39
-
-
-
-
- Ensure that application Namespaces have Network Policies defined.
-
- Amazon Elastic Kubernetes Service
-
- Ensure that application Namespaces have Network Policies defined
-
-
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Kubelet Configuration File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /etc/kubernetes/kubelet/kubelet-config.json is group owned by 0.
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Worker Kubeconfig File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /var/lib/kubelet/kubeconfig is group owned by 0.
-
-
-
-
-
-
-
-
- Verify User Who Owns The Kubelet Configuration File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /etc/kubernetes/kubelet/kubelet-config.json is owned by 0.
-
-
-
-
-
-
-
-
- Verify User Who Owns The Worker Kubeconfig File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /var/lib/kubelet/kubeconfig is owned by 0.
-
-
-
-
-
-
-
-
- Verify Permissions on The Kubelet Configuration File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /etc/kubernetes/kubelet/kubelet-config.json has mode 0644.
- If the target file or directory has an extended ACL, then it will fail the mode check.
-
-
-
-
-
-
-
-
-
- Verify Permissions on the Worker Kubeconfig File
-
- Amazon Elastic Kubernetes Service
-
- This test makes sure that /var/lib/kubelet/kubeconfig has mode 0644.
- If the target file or directory has an extended ACL, then it will fail the mode check.
-
-
-
-
-
-
-
-
-
- Disable Anonymous Authentication to the Kubelet
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.authentication.anonymous.enabled' all: value equals 'false'
-
-
-
-
-
-
-
-
- Ensure authorization is set to Webhook
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.authorization.mode' all: value equals 'AlwaysAllow'
-
-
-
-
-
-
-
-
- kubelet - Configure the Client CA Certificate
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.authentication.x509.clientCAFile' all: value equals '/etc/kubernetes/pki/ca.crt'
-
-
-
-
-
-
-
-
- kubelet - Hostname Override handling
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.hostname-override' all: value equals '.*'
-
-
-
-
-
-
-
-
- kubelet - Enable Certificate Rotation
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.rotateCertificates' all: value equals 'true'
-
-
-
-
-
-
-
-
- kubelet - Enable Client Certificate Rotation
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.featureGates.RotateKubeletClientCertificate' all: value equals 'false'
-
-
-
-
-
-
-
-
- kubelet - Allow Automatic Firewall Configuration
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.makeIPTablesUtilChains' all: value equals 'true'
-
-
-
-
-
-
-
-
- kubelet - Enable Protect Kernel Defaults
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.protectKernelDefaults' all: value equals 'true'
-
-
-
-
-
-
-
-
- kubelet - Enable Server Certificate Rotation
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.featureGates.RotateKubeletServerCertificate' all: value equals 'true'
-
-
-
-
-
-
-
-
- kubelet - Do Not Disable Streaming Timeouts
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.streamingConnectionIdleTimeout' all:
-
-
-
-
-
-
-
-
- kubelet - Ensure that the --read-only-port is secured
-
- Amazon Elastic Kubernetes Service
-
- In the YAML/JSON file '/etc/kubernetes/kubelet/kubelet-config.json' at path '.readOnlyPort' all: value equals '0'
-
-
-
-
-
-
-
-
- package_GConf2_installed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package GConf2 should be installed.
-
-
-
-
-
-
-
-
- package_avahi_installed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package avahi should be installed.
-
-
-
-
-
-
-
-
- package_dconf_installed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package dconf should be installed.
-
-
-
-
-
-
-
-
- package_esc_installed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package esc should be installed.
-
-
-
-
-
-
-
-
- package_gdm_installed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package gdm should be installed.
-
-
-
-
-
-
-
-
- package_pam_ldap_removed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package pam_ldap should be removed.
-
-
-
-
-
-
-
-
- package_prelink_removed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package prelink should be removed.
-
-
-
-
-
-
-
-
- package_samba-common_removed
-
- Amazon Elastic Kubernetes Service
-
- The RPM package samba-common should be removed.
-
-
-
-
-
-
-
-
- service_syslog_disabled
-
- Amazon Elastic Kubernetes Service
-
- The syslog service should be disabled if possible.
-
-
-
-
-
-
-
-
-
-
-
-
- sshd_includes_config_files
-
- Amazon Elastic Kubernetes Service
-
- Check presence of Include /etc/ssh/sshd_config.d/*.conf in /etc/ssh/sshd_config
-
-
-
-
-
-
-
-
- Check pam_faillock Existence in system-auth
-
- Amazon Elastic Kubernetes Service
-
- Check that pam_faillock.so exists in system-auth
-
-
-
-
-
-
-
-
- Check pam_pwquality Existence in system-auth
-
- Amazon Elastic Kubernetes Service
-
- Check that pam_pwquality.so exists in system-auth
-
-
-
-
-
-
-
-
- Record Any Attempts to Run semanage
-
- Amazon Elastic Kubernetes Service
-
- Test if auditctl is in use for audit rules.
-
-
-
-
-
-
-
-
- Record Any Attempts to Run semanage
-
- Amazon Elastic Kubernetes Service
-
- Test if augenrules is enabled for audit rules.
-
-
-
-
-
-
-
-
- Record Events that Modify the System's Network Environment
-
- Amazon Elastic Kubernetes Service
-
- The network environment should not be modified by anything other than
- administrator action. Any change to network parameters should be audited.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Record Events that Modify the System's Network Environment
-
- Amazon Elastic Kubernetes Service
-
- The network environment should not be modified by anything other than
- administrator action. Any change to network parameters should be audited.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 'log_file' Not Set In /etc/audit/auditd.conf
-
- Amazon Elastic Kubernetes Service
-
- Verify 'log_file' is not set in /etc/audit/auditd.conf.
-
-
-
-
-
-
-
-
- 'log_group' Not Set To 'root' In /etc/audit/auditd.conf
-
- Amazon Elastic Kubernetes Service
-
- Verify 'log_group' is not set to 'root' in
- /etc/audit/auditd.conf.
-
-
-
-
-
-
-
-
-
- Verify GRUB_DISABLE_RECOVERY Set to true
-
- Amazon Elastic Kubernetes Service
-
- GRUB_DISABLE_RECOVERY set to 'true' in
- /etc/default/grub
-
-
-
-
-
-
-
-
- Specify Multiple Remote chronyd NTP Servers for Time Data
-
- Amazon Elastic Kubernetes Service
-
- Multiple chronyd NTP Servers for time synchronization should be specified.
-
-
-
-
-
-
-
-
- GRUB_CMDLINE_LINUX_DEFAULT existence check
-
- Amazon Elastic Kubernetes Service
-
- Check if GRUB_CMDLINE_LINUX_DEFAULT exists in /etc/default/grub.
-
-
-
-
-
-
-
-
- Use $kernelopts in /boot/loader/entries/*.conf
-
- Amazon Elastic Kubernetes Service
-
- Ensure that grubenv-defined kernel options are referenced in individual boot loader entries
-
-
-
-
-
-
-
-
- Alibaba Cloud Linux 2
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Alibaba Cloud Linux 2
-
-
-
-
-
-
-
-
-
- Alibaba Cloud Linux 3
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Alibaba Cloud Linux 3
-
-
-
-
-
-
-
-
-
- CentOS 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS 7
-
-
-
-
-
-
-
-
-
- CentOS 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS 8
-
-
-
-
-
-
-
-
-
-
- CentOS Stream 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS Stream 9
-
-
-
-
-
-
-
-
-
-
- Debian
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed is a Debian System
-
-
-
-
-
-
-
-
-
- Debian Linux 10
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 10
-
-
-
-
-
-
-
-
-
- Debian Linux 11
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 11
-
-
-
-
-
-
-
-
-
- Debian 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 9
-
-
-
-
-
-
-
-
-
- Installed operating system is Fedora
-
- Amazon Elastic Kubernetes Service
-
-
-
-
-
- The operating system installed on the system is Fedora
-
-
-
-
-
-
-
-
-
-
- Oracle Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 7
-
-
-
-
-
-
-
-
-
-
-
- Oracle Linux 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 8
-
-
-
-
-
-
-
-
-
-
-
- Oracle Linux 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 9
-
-
-
-
-
-
-
-
-
-
-
- openSUSE
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed on the system is openSUSE.
-
-
-
-
-
-
-
-
-
- openSUSE Leap 15
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is openSUSE Leap 15.
-
-
-
-
-
-
-
-
-
- openSUSE Leap 42
-
- Amazon Elastic Kubernetes Service
-
-
-
-
- The operating system installed on the system is openSUSE Leap 42.
-
-
-
-
-
-
-
-
-
- Installed operating system is part of the Unix family
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed on the system is part of the Unix OS family
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux CoreOS
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux CoreOS release 4
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 7
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 8
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.0
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.0
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.1
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.1
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.2
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.2
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.3
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.3
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.4
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.4
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.5
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.5
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.6
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.6
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.7
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.8
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.9
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.10
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.10
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 9
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Virtualization 4
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Virtualization Host 4.4+ or Red Hat Enterprise Host.
-
-
-
-
-
-
-
-
-
- Scientific Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Scientific Linux 7
-
-
-
-
-
-
-
-
-
- SUSE Linux Enterprise 12
-
- Amazon Elastic Kubernetes Service
-
-
-
- The operating system installed on the system is
- SUSE Linux Enterprise 12.
-
-
-
-
-
-
-
-
-
-
-
-
-
- SUSE Linux Enterprise 15
-
- Amazon Elastic Kubernetes Service
-
-
-
- The operating system installed on the system is
- SUSE Linux Enterprise 15.
-
-
-
-
-
-
-
-
-
-
-
-
-
- Ubuntu
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed is an Ubuntu System
-
-
-
-
-
-
-
-
-
-
- Ubuntu 1604
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 1604
-
-
-
-
-
-
-
-
-
- Ubuntu 1804
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 1804
-
-
-
-
-
-
-
-
-
- Ubuntu 2004
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 2004
-
-
-
-
-
-
-
-
-
- UnionTech OS Server 20
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is UnionTech OS Server 20
-
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is EKS.
-
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service 1.21
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is Amazon Elastic Kubernetes Service 1.21.
-
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service Node
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is EKS 4.
-
-
-
-
-
-
-
-
- Red Hat Virtualization 4
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is
- Red Hat Virtualization 4.
-
-
-
-
-
-
-
-
-
- Package audit is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package audit is installed.
-
-
-
-
-
-
-
-
-
- Package chrony is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package chrony is installed.
-
-
-
-
-
-
-
-
-
- Package gdm is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package gdm is installed.
-
-
-
-
-
-
-
-
-
- Package grub2 is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package grub2-common is installed.
-
-
-
-
-
-
-
-
-
-
-
-
-
- Package libuser is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package libuser is installed.
-
-
-
-
-
-
-
-
-
- Package providing /etc/login.defs is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if the package providing /etc/login.defs is installed.
-
-
-
-
-
-
-
-
-
- Package net-snmp is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package net-snmp is installed.
-
-
-
-
-
-
-
-
-
- Check if the system doesn't act as an oVirt host or manager
-
- Amazon Elastic Kubernetes Service
-
- Check if the system has neither ovirt-host nor ovirt-engine installed.
-
-
-
-
-
-
-
-
- Package nss-pam-ldapd is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package nss-pam-ldapd is installed.
-
-
-
-
-
-
-
-
-
- Package ntp is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package ntp is installed.
-
-
-
-
-
-
-
-
-
- Check if the system acts as an oVirt host or manager
-
- Amazon Elastic Kubernetes Service
-
- Check if the system has ovirt-host or ovirt-engine installed
-
-
-
-
-
-
-
-
-
-
- Package pam is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package pam is installed.
-
-
-
-
-
-
-
-
-
- Package polkit is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package polkit is installed.
-
-
-
-
-
-
-
-
-
- Package postfix is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package postfix is installed.
-
-
-
-
-
-
-
-
-
- Package sssd-common is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package sssd-common is installed.
-
-
-
-
-
-
-
-
-
- Package sudo is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package sudo is installed.
-
-
-
-
-
-
-
-
-
- Package systemd is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package systemd is installed.
-
-
-
-
-
-
-
-
-
- Package tftp-server is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package tftp-server is installed.
-
-
-
-
-
-
-
-
-
- Package tmux is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package tmux is installed.
-
-
-
-
-
-
-
-
-
- Package usbguard is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package usbguard is installed.
-
-
-
-
-
-
-
-
-
- WiFi interface is present
-
- Amazon Elastic Kubernetes Service
-
- Checks if any wifi interface is present.
-
-
-
-
-
-
-
-
-
- Package yum is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package yum is installed.
-
-
-
-
-
-
-
-
-
- System uses zIPL
-
- Amazon Elastic Kubernetes Service
-
- Checks if system uses zIPL bootloader.
-
-
-
-
-
-
-
-
-
- Check if the scan target is a container
-
- Amazon Elastic Kubernetes Service
-
- Check for presence of files characterizing container filesystems.
-
-
-
-
-
-
-
-
-
-
- Check if the scan target is a machine
-
- Amazon Elastic Kubernetes Service
-
- Check for absence of files characterizing container filesystems.
-
-
-
-
-
-
-
-
-
- Kerberos server is older than 1.17-18
-
- Amazon Elastic Kubernetes Service
-
-
- Check if the version of Kerberos server is less than 1.17-18
-
-
-
-
-
-
-
-
-
- Kerberos workstation is older than 1.17-18
-
- Amazon Elastic Kubernetes Service
-
-
- Check if the version of Kerberos workstation is less than 1.17-18
-
-
-
-
-
-
-
-
-
- No CD/DVD drive is configured to automount in /etc/fstab
-
- Amazon Elastic Kubernetes Service
-
- Check the /etc/fstab and check if a CD/DVD drive
- is not configured for automount.
-
-
-
-
-
-
-
-
- Test that the architecture is aarch64
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is aarch64
-
-
-
-
-
-
-
-
- Test for different architecture than aarch64
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is not aarch64
-
-
-
-
-
-
-
-
- Test for different architecture than s390x
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is not s390x
-
-
-
-
-
-
-
-
- Test that the architecture is ppc64le
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is ppc64le
-
-
-
-
-
-
-
-
- Test that the architecture is s390x
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is s390x
-
-
-
-
-
-
-
-
- Device Files for Removable Media Partitions Do Not Exist on the System
-
- Amazon Elastic Kubernetes Service
-
- Verify whether device files representing removable partitions
- exist on the system
-
-
-
-
-
-
-
-
- SSHD is not required to be installed or requirement not set
-
- Amazon Elastic Kubernetes Service
-
- If SSHD is not required, we check it is not installed. If SSH requirement is unset, we are good.
-
-
-
-
-
-
-
-
-
- SSHD is required to be installed or requirement not set
-
- Amazon Elastic Kubernetes Service
-
- If SSHD is required, we check it is installed. If SSH requirement is unset, we are good.
-
-
-
-
-
-
-
-
-
- It doesn't matter if sshd is installed or not
-
- Amazon Elastic Kubernetes Service
-
- Test if value sshd_required is 0.
-
-
-
-
-
-
-
-
- OpenSSH Server is 7.4 or newer
-
- Amazon Elastic Kubernetes Service
-
- Check if the version of OpenSSH Server is equal to or higher than 7.4
-
-
-
-
-
-
-
-
- SSSD is configured to use LDAP
-
- Amazon Elastic Kubernetes Service
-
- Identification provider is not set to ad within /etc/sssd/sssd.conf
-
-
-
-
-
-
-
-
-
- Non-UEFI system boot mode check
-
- Amazon Elastic Kubernetes Service
-
- Check if System boot mode is non-UEFI.
-
-
-
-
-
-
-
-
-
- UEFI system boot mode check
-
- Amazon Elastic Kubernetes Service
-
- Check if system boot mode is UEFI.
-
-
-
-
-
-
-
-
-
- Test for 64-bit Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for 64-bit architectures to be used by other tests
-
-
-
-
-
-
-
-
-
-
-
- Test for aarch_64 Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for aarch_64 architecture to be used by other tests
-
-
-
-
-
-
-
-
- Test for PPC and PPC64LE Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for PPC and PPC64LE architectures to be used by other tests
-
-
-
-
-
-
-
-
-
- Test for s390_64 Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for s390_64 architecture to be used by other tests
-
-
-
-
-
-
-
-
- Test for x86 Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for x86 architecture to be used by other tests
-
-
-
-
-
-
-
-
- Test for x86_64 Architecture
-
- Amazon Elastic Kubernetes Service
-
- Generic test for x86_64 architecture to be used by other tests
-
-
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service
-
- Check /etc/tmux.conf is readable by others
-
-
-
-
-
-
-
-
- Check that file storing USBGuard rules exists and is not empty
-
- Amazon Elastic Kubernetes Service
-
- Check that file storing USBGuard rules at /etc/usbguard/rules.conf exists and is not empty
-
-
-
-
-
-
-
-
- Value of 'var_accounts_user_umask' variable represented as octal number
-
- Amazon Elastic Kubernetes Service
-
- Value of 'var_accounts_user_umask' variable represented as octal number
-
-
-
-
-
-
-
-
- Value of 'var_removable_partition' variable is set to '/dev/cdrom'
-
- Amazon Elastic Kubernetes Service
-
- Verify if value of 'var_removable_partition' variable is set
- to '/dev/cdrom'
-
-
-
-
-
-
-
-
- Value of 'var_umask_for_daemons' variable represented as octal number
-
- Amazon Elastic Kubernetes Service
-
- Value of 'var_umask_for_daemons' variable represented as octal number
-
-
-
-
-
-
-
- [:]
-
-
-
- [:].metadata.name
-
-
- oval:ssg-local_variable_counter_configure_network_policies_namespaces:var:1
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
- oval:ssg-symlink_file_groupowner_kubelet_conf_uid_0:ste:1
- oval:ssg-state_file_groupowner_kubelet_conf_gid_0_0:ste:1
-
-
- /var/lib/kubelet/kubeconfig
- oval:ssg-symlink_file_groupowner_worker_kubeconfig_uid_0:ste:1
- oval:ssg-state_file_groupowner_worker_kubeconfig_gid_0_0:ste:1
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
- oval:ssg-symlink_file_owner_kubelet_conf_uid_0:ste:1
- oval:ssg-state_file_owner_kubelet_conf_uid_0_0:ste:1
-
-
- /var/lib/kubelet/kubeconfig
- oval:ssg-symlink_file_owner_worker_kubeconfig_uid_0:ste:1
- oval:ssg-state_file_owner_worker_kubeconfig_uid_0_0:ste:1
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
- oval:ssg-exclude_symlinks__kubelet_conf:ste:1
- oval:ssg-state_file_permissions_kubelet_conf_0_mode_0644or_stricter_:ste:1
-
-
- /var/lib/kubelet/kubeconfig
- oval:ssg-exclude_symlinks__worker_kubeconfig:ste:1
- oval:ssg-state_file_permissions_worker_kubeconfig_0_mode_0644or_stricter_:ste:1
-
-
-
-
-
-
- .authentication.anonymous.enabled
-
-
-
-
-
-
- .authorization.mode
-
-
-
-
-
-
- .authentication.x509.clientCAFile
-
-
-
-
-
-
- .hostname-override
-
-
-
-
-
-
- .rotateCertificates
-
-
-
-
-
-
- .featureGates.RotateKubeletClientCertificate
-
-
-
-
-
-
- .makeIPTablesUtilChains
-
-
-
-
-
-
- .protectKernelDefaults
-
-
-
-
-
-
- .featureGates.RotateKubeletServerCertificate
-
-
-
-
-
-
- .streamingConnectionIdleTimeout
-
-
-
-
-
-
- .readOnlyPort
-
-
- GConf2
-
-
- avahi
-
-
- dconf
-
-
- esc
-
-
- gdm
-
-
- pam_ldap
-
-
- prelink
-
-
- samba-common
-
-
- ^syslog\.(service|socket)$
- ActiveState
-
-
- ^syslog\.(service|socket)$
- LoadState
-
-
- rsyslog
-
-
- /etc/ssh/sshd_config
- ^[\s]*Include /etc/ssh/sshd_config\.d/\*\.conf[\s]*$
- 1
-
-
- /etc/pam.d/system-auth
- ^\s*password\s+(?:(?:required)|(?:requisite))\s+pam_faillock\.so.*$
- 1
-
-
-
- ^\s*password\s+(?:(?:required)|(?:requisite))\s+pam_pwquality\.so.*$
- 1
-
-
- /usr/lib/systemd/system/auditd.service
- ^ExecStartPost=\-\/sbin\/auditctl.*$
- 1
-
-
- /usr/lib/systemd/system/auditd.service
- ^(ExecStartPost=\-\/sbin\/augenrules.*$|Requires=augenrules.service)
- 1
-
-
- ^/etc/audit/rules\.d/.*\.rules$
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b32[\s]+)(?:.*(-S[\s]+setdomainname[\s]+|([\s]+|[,])setdomainname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- ^/etc/audit/rules\.d/.*\.rules$
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b64[\s]+)(?:.*(-S[\s]+setdomainname[\s]+|([\s]+|[,])setdomainname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- /etc/audit/audit.rules
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b32[\s]+)(?:.*(-S[\s]+setdomainname[\s]+|([\s]+|[,])setdomainname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- /etc/audit/audit.rules
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b64[\s]+)(?:.*(-S[\s]+setdomainname[\s]+|([\s]+|[,])setdomainname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- ^/etc/audit/rules\.d/.*\.rules$
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b32[\s]+)(?:.*(-S[\s]+sethostname[\s]+|([\s]+|[,])sethostname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- ^/etc/audit/rules\.d/.*\.rules$
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b64[\s]+)(?:.*(-S[\s]+sethostname[\s]+|([\s]+|[,])sethostname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- /etc/audit/audit.rules
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b32[\s]+)(?:.*(-S[\s]+sethostname[\s]+|([\s]+|[,])sethostname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- /etc/audit/audit.rules
- ^[\s]*-a[\s]+always,exit[\s]+(?:.*-F[\s]+arch=b64[\s]+)(?:.*(-S[\s]+sethostname[\s]+|([\s]+|[,])sethostname([\s]+|[,]))).*(-k[\s]+|-F[\s]+key=)[\S]+[\s]*$
- 1
-
-
- /etc/audit/auditd.conf
- ^(log_file\s*=\s*.*)$
- 1
-
-
- /etc/audit/auditd.conf
- ^[ ]*log_group[ ]+=[ ]+root[ ]*$
- 1
-
-
- /etc/audit/auditd.conf
- ^[ ]*log_group[ ]+=.*$
- 1
-
-
- /etc/default/grub
- ^\s*GRUB_DISABLE_RECOVERY=(.*)$
- 1
-
-
- ^/etc/chrony\.(conf|d/.+\.conf)$
- ^([\s]*server[\s]+.+$){2,}$
- 1
-
-
- /etc/default/grub
- ^\s*GRUB_CMDLINE_LINUX_DEFAULT=.*$
- 1
-
-
- /boot/loader/entries/
- ^.*\.conf$
- ^options(?:\s+.*)?\s+\$kernelopts\b.*$
- 1
-
-
- alinux-release
-
-
- alinux-release
-
-
- centos-release
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)"$
- 1
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)"$
- 1
-
-
- /etc/debian_version
-
-
- /etc/debian_version
- ^10.[0-9]+$
- 1
-
-
- /etc/debian_version
- ^11.[0-9]+$
- 1
-
-
- /etc/debian_version
- ^9.[0-9]+$
- 1
-
-
- fedora-release.*
-
-
- /etc/system-release-cpe
- ^cpe:\/o:fedoraproject:fedora:[\d]+$
- 1
-
-
- oraclelinux-release
-
-
- oraclelinux-release
-
-
- oraclelinux-release
-
-
- openSUSE-release
-
-
- openSUSE-release
-
-
- openSUSE-release
-
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)\.\d+"$
- 1
-
-
-
- redhat-release-client
-
-
- redhat-release-workstation
-
-
- redhat-release-server
-
-
- redhat-release-computenode
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
-
- redhat-release
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
- redhat-release-virtualization-host
-
-
- sl-release
-
-
-
- sled-release
-
-
- sles-release
-
-
- SLES_SAP-release
-
-
-
- sled-release
-
-
- sles-release
-
-
- SLES_SAP-release
-
-
- /etc/lsb-release
-
-
- /etc/lsb-release
- ^DISTRIB_ID=Ubuntu$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=xenial$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=bionic$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=focal$
- 1
-
-
- uos-release
-
-
-
-
-
-
- .gitVersion
-
-
- /var/lib/kubelet/kubeconfig
-
-
- rhvm-appliance
-
-
- audit
-
-
- chrony
-
-
- gdm
-
-
- grub2-common
-
-
- /sys/firmware/opal
-
-
- libuser
-
-
- shadow-utils
-
-
- net-snmp
-
-
- nss-pam-ldapd
-
-
- ntp
-
-
- ovirt-host
-
-
- ovirt-engine
-
-
- pam
-
-
- polkit
-
-
- postfix
-
-
- sssd-common
-
-
- sudo
-
-
- systemd
-
-
- tftp-server
-
-
- tmux
-
-
- usbguard
-
-
- /proc/net/wireless
-
-
- yum
-
-
- s390utils-base
-
-
- /.dockerenv
-
-
- /run/.containerenv
-
-
- krb5-server
-
-
- krb5-workstation
-
-
- /etc/fstab
-
- 1
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
-
-
-
- oval:ssg-sshd_required:var:1
-
-
- oval:ssg-sshd_required:var:1
-
-
- oval:ssg-sshd_required:var:1
-
-
- openssh-server
-
-
- /etc/sssd/sssd.conf
- ^[\s]*\[domain\/[^]]*]([^\n\[\]]*\n+)+?[\s]*id_provider[ \t]*=[ \t]*((?i)ad)[ \t]*$
- 1
-
-
- /sys/firmware/efi
-
-
-
-
-
-
-
-
-
- /etc/tmux.conf
-
-
- ^/etc/usbguard/(rules|rules\.d/.*)\.conf$
- ^.*\S+.*$
- 1
-
-
- oval:ssg-var_accounts_user_umask_umask_as_number:var:1
-
-
- oval:ssg-var_removable_partition:var:1
-
-
- oval:ssg-var_umask_for_daemons_umask_as_number:var:1
-
-
-
-
-
-
-
- 0
-
-
- symbolic link
-
-
- 0
-
-
- symbolic link
-
-
- 0
-
-
- symbolic link
-
-
- 0
-
-
- symbolic link
-
-
- false
- false
- false
- false
- false
- false
- false
- false
-
-
- symbolic link
-
-
- false
- false
- false
- false
- false
- false
- false
- false
-
-
- symbolic link
-
-
-
- false
-
-
-
-
- AlwaysAllow
-
-
-
-
- /etc/kubernetes/pki/ca.crt
-
-
-
-
- .*
-
-
-
-
- true
-
-
-
-
- false
-
-
-
-
- true
-
-
-
-
- true
-
-
-
-
- true
-
-
-
-
-
-
-
-
-
- 0
-
-
-
- inactive|failed
-
-
- masked
-
-
- ^(true|"true")$
-
-
- ^2.*$
-
-
- ^3.*$
-
-
- ^7.*$
-
-
- centos
-
-
- 8
-
-
- centos
-
-
- 9
-
-
- ^7.*$
-
-
- ^8.*$
-
-
- ^9.*$
-
-
- openSUSE-release
-
-
- ^15.*$
-
-
- ^42.*$
-
-
- unix
-
-
- rhcos
-
-
- 4
-
-
- unix
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- 7
-
-
- unix
-
-
- ^8.*$
-
-
- ^8.0*$
-
-
- ^8.1*$
-
-
- ^8.2*$
-
-
- ^8.3*$
-
-
- ^8.4*$
-
-
- ^8.5*$
-
-
- ^8.6*$
-
-
- ^8.7*$
-
-
- ^8.8*$
-
-
- ^8.9*$
-
-
- ^8.10*$
-
-
- 8
-
-
- unix
-
-
- ^9.*$
-
-
- 9
-
-
- 0:4.4
-
-
- ^7.*$
-
-
- unix
-
-
- ^12.*$
-
-
- ^12.*$
-
-
- ^12.*$
-
-
- unix
-
-
- ^15.*$
-
-
- ^15.*$
-
-
- ^15.*$
-
-
- ^20.*$
-
-
-
- ^.*-eks-.*$
-
-
-
-
- ^v1\.21\..*
-
-
-
- ^4.*$
-
-
- 0:1.17-18
-
-
- 0:1.17-18
-
-
- ^aarch64$
-
-
- ^ppc64le$
-
-
- ^s390x$
-
-
- 1
-
-
- 2
-
-
- 0
-
-
- 0:7.4
-
-
- aarch64
-
-
- ppc64
-
-
- ppc64le
-
-
- s390x
-
-
- i686
-
-
- x86_64
-
-
- true
-
-
- /dev/cdrom
-
-
-
-
-
-
- /apis/networking.k8s.io/v1/networkpolicies#51742b3e87275db9eb7fc6c0286a9e536178a2a83e3670b615ceaf545e7fd300
-
-
-
-
-
- /api/v1/namespaces#34d4beecc95c65d815d9d48fd4fdcb0c521631852ad088ef74e36d012b0e1e0d
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
-
- /etc/kubernetes/kubelet/kubelet-config.json
-
-
- /etc/pam.d/system-auth
-
-
-
-
-
-
-
- /kubernetes-api-resources/version
-
-
- /dev/cdrom
- /dev/dvd
- /dev/scd0
- /dev/sr0
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 64
-
-
-
- 8
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 64
-
-
-
- 8
-
-
-
-
-
-
-
-
-
-
-
- build_shorthand.py from SCAP Security Guide
- ssg: 0.1.64
- 2.0
- 2022-08-11T18:55:40
-
-
-
- Use Dedicated Service Accounts
-
- ocil:ssg-dedicated_service_accounts_action:testaction:1
-
-
-
- Manage Users with AWS IAM
-
- ocil:ssg-iam_integration_action:testaction:1
-
-
-
- Consider Fargate for Untrusted Workloads
-
- ocil:ssg-fargate_action:testaction:1
-
-
-
- Disable Anonymous Authentication to the Kubelet
-
- ocil:ssg-kubelet_anonymous_auth_worker_action:testaction:1
-
-
-
- Ensure authorization is set to Webhook
-
- ocil:ssg-kubelet_authorization_mode_worker_action:testaction:1
-
-
-
- kubelet - Configure the Client CA Certificate
-
- ocil:ssg-kubelet_configure_client_ca_worker_action:testaction:1
-
-
-
- kubelet - Enable Certificate Rotation
-
- ocil:ssg-kubelet_enable_cert_rotation_worker_action:testaction:1
-
-
-
- kubelet - Enable Client Certificate Rotation
-
- ocil:ssg-kubelet_enable_client_cert_rotation_action:testaction:1
-
-
-
- kubelet - Allow Automatic Firewall Configuration
-
- ocil:ssg-kubelet_enable_iptables_util_chains_action:testaction:1
-
-
-
- kubelet - Enable Protect Kernel Defaults
-
- ocil:ssg-kubelet_enable_protect_kernel_defaults_action:testaction:1
-
-
-
- kubelet - Enable Server Certificate Rotation
-
- ocil:ssg-kubelet_enable_server_cert_rotation_worker_action:testaction:1
-
-
-
- kubelet - Do Not Disable Streaming Timeouts
-
- ocil:ssg-kubelet_enable_streaming_connections_worker_action:testaction:1
-
-
-
- kubelet - Ensure that the --read-only-port is secured
-
- ocil:ssg-kubelet_read_only_port_secured_worker_action:testaction:1
-
-
-
- Ensure Audit Logging is Enabled
-
- ocil:ssg-audit_logging_action:testaction:1
-
-
-
- Ensure that application Namespaces have Network Policies defined.
-
- ocil:ssg-configure_network_policies_namespaces_action:testaction:1
-
-
-
- Ensure Network Policy is Enabled
-
- ocil:ssg-configure_network_policy_action:testaction:1
-
-
-
- Encrypt Traffic to Load Balancers and Workloads
-
- ocil:ssg-configure_tls_action:testaction:1
-
-
-
- Restrict Access to the Control Plane Endpoint
-
- ocil:ssg-control_plane_access_action:testaction:1
-
-
-
- Ensure Private Endpoint Access
-
- ocil:ssg-endpoint_configuration_action:testaction:1
-
-
-
- Ensure Cluster Private Nodes
-
- ocil:ssg-private_nodes_action:testaction:1
-
-
-
- Only use approved container registries
-
- ocil:ssg-approved_registries_action:testaction:1
-
-
-
- Ensure Image Vulnerability Scanning
-
- ocil:ssg-image_scanning_action:testaction:1
-
-
-
- Ensure Cluster Service Account with read-only access to Amazon ECR
-
- ocil:ssg-read_only_registry_access_action:testaction:1
-
-
-
- Minimize user access to Amazon ECR
-
- ocil:ssg-registry_access_action:testaction:1
-
-
-
- Ensure Kubernetes Secrets are Encrypted
-
- ocil:ssg-secret_encryption_action:testaction:1
-
-
-
- Verify Group Who Owns The Kubelet Configuration File
-
- ocil:ssg-file_groupowner_kubelet_conf_action:testaction:1
-
-
-
- Verify Group Who Owns The Worker Kubeconfig File
-
- ocil:ssg-file_groupowner_worker_kubeconfig_action:testaction:1
-
-
-
- Verify User Who Owns The Kubelet Configuration File
-
- ocil:ssg-file_owner_kubelet_conf_action:testaction:1
-
-
-
- Verify User Who Owns The Worker Kubeconfig File
-
- ocil:ssg-file_owner_worker_kubeconfig_action:testaction:1
-
-
-
- Verify Permissions on The Kubelet Configuration File
-
- ocil:ssg-file_permissions_kubelet_conf_action:testaction:1
-
-
-
- Verify Permissions on the Worker Kubeconfig File
-
- ocil:ssg-file_permissions_worker_kubeconfig_action:testaction:1
-
-
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
- PASS
-
-
- FAIL
-
-
-
-
-
- Audit:
-
-For each namespace in the cluster, review the rights assigned to the default
-service account and ensure that it has no roles or cluster roles bound to it
-apart from the defaults. Additionally ensure that the
-automountServiceAccountToken: false setting is in place for each
-default service account.
-
-Remediation:
-
-With IAM roles for service accounts on Amazon EKS clusters, you can associate
-an IAM role with a Kubernetes service account. This service account can then
-provide AWS permissions to the containers in any pod that uses that service
-account. With this feature, you no longer need to provide extended
-permissions to the worker node IAM role so that pods on that node can call
-AWS APIs.
-Applications must sign their AWS API requests with AWS credentials. This
-feature provides a strategy for managing credentials for your applications,
-similar to the way that Amazon EC2 instance profiles provide credentials to
-Amazon EC2 instances. Instead of creating and distributing your AWS
-credentials to the containers or using the Amazon EC2 instance’s role, you
-can associate an IAM role with a Kubernetes service account. The applications
-in the pod’s containers can then use an AWS SDK or the AWS CLI to make API
-requests to authorized AWS services.
-
-The IAM roles for service accounts feature provides the following benefits:
-
-
- Least privilege — By using the IAM roles for service accounts feature,
- you no longer need to provide extended permissions to the worker node IAM
- role so that pods on that node can call AWS APIs. You can scope IAM
- permissions to a service account, and only pods that use that service
- account have access to those permissions. This feature also eliminates the
- need for third-party solutions such as kiam or kube2iam.
- Credential isolation — A container can only retrieve credentials for
- the IAM role that is associated with the service account to which it
- belongs. A container never has access to credentials that are intended for
- another container that belongs to another pod.
- Auditability — Access and event logging is available through CloudTrail
- to help ensure retrospective auditing.
-
-
-To get started, see Enabling IAM roles for service accounts on your cluster.
-For an end-to-end walkthrough using eksctl, see Walkthrough: Updating
-a DaemonSet to use IAM for service accounts.
- Is it the case that dedicated service accounts are used?
-
-
-
- Audit:
-
-To Audit access to the namespace $NAMESPACE, assume the IAM role
-yourIAMRoleName for a user that you created, and then run the following
-command:
-
-$ kubectl get role -n $NAMESPACE
-The response lists the RBAC role that has access to this Namespace.
-
-Remediation:
-
-Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS
-documentation.
-
-https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
- Is it the case that authorization and authentication is managed using AWS IAM?
-
-
-
- Audit:
-Check the existence of Fargate profiles in the Amazon EKS cluster by using:
-
-aws --region ${AWS_REGION} eks list-fargate-profiles --cluster-name ${CLUSTER_NAME}
-Alternatively, to audit for the presence of a Fargate profile node run the
-following command:
-kubectl get nodes
-The response should include a NAME entry starting with "fargate-ip" for
-example:
-NAME STATUS ROLES AGE VERSION
-fargate-ip-192-168-104-74.us-east-2.compute.internal Ready 2m15s v1.14.8-eks
-
-Remediation:
-
-Create a Fargate profile for your cluster
-
-Before you can schedule pods running on Fargate in your cluster, you must define a Fargate
-profile that specifies which pods should use Fargate when they are launched. For more
-information, see AWS Fargate profile.
-
-Note
-If you created your cluster with eksctl using the --fargate option,
-then a Fargate profile has already been created for your cluster with
-selectors for all pods in the kube-system and default namespaces.
-Use the following procedure to create Fargate profiles for any other
-namespaces you would like to use with Fargate.
-
-via eksctl CLI
-
-Create your Fargate profile with the following eksctl command, replacing the
-variable text with your own values. You must specify a namespace, but the
-labels option is not required.
-eksctl create fargateprofile --cluster cluster_name --name
-fargate_profile_name --namespace kubernetes_namespace --labels key=value
-
-via AWS Management Console
-
-To create a Fargate profile for a cluster with the AWS Management Console
-
-
- Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
- Choose the cluster to create a Fargate profile for.
- Under Fargate profiles, choose Add Fargate profile.
- On the Configure Fargate profile page, enter the following information
- and choose Next. For Name, enter a unique name for your Fargate profile. For Pod
- execution role, choose the pod execution role to use with your Fargate
- profile. Only IAM roles with the eks-fargate-pods.amazonaws.com service principal
- are shown. If you do not see any roles listed here, you must create one. For more
- information, see Pod execution role. For Subnets, choose the subnets to use
- for your pods. By default, all subnets in your cluster's VPC are selected.
- Only private subnets are supported for pods running on Fargate; you must
- deselect any public subnets. For Tags, you can optionally tag your Fargate
- profile. These tags do not propagate to other resources associated with the
- profile, such as its pods.
- 5. On the Configure pods selection page, enter the following
- information and choose Next. For Namespace, enter a namespace to match
- for pods, such as kube-system or default.
- (Optional) Add Kubernetes labels to the selector that pods in the
- specified namespace must have to match the selector. For example, you could
- add the label infrastructure: fargate to the selector so that only pods in
- the specified namespace that also have the infrastructure: fargate
- Kubernetes label match the selector.
- On the Review and create page, review the information for your Fargate
- profile and choose Create.
-
- Is it the case that untrusted workloads are isolated?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep -A1 anonymous /etc/kubernetes/kubelet/kubelet-config.json
-The output should return enabled: false.
- Is it the case that anonymous authentication is not set to false?
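For illustration only (this sketch is not part of the deleted datastream): the same check can be done non-interactively with jq, assuming jq is present on the node and the file path matches the one used above.

    # Print the effective anonymous-auth setting; a compliant node returns "false".
    sudo jq '.authentication.anonymous.enabled' /etc/kubernetes/kubelet/kubelet-config.json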
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep -A1 authorization /etc/kubernetes/kubelet/kubelet-config.json
-Verify that the output is not set to mode: AlwaysAllow, or missing
-(defaults to mode: Webhook).
- Is it the case that authorization-mode is not configured to Webhook?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep -A1 x509 /etc/kubernetes/kubelet/kubelet-config.json
-The output should contain a configured certificate like /etc/kubernetes/pki/ca.crt.
- Is it the case that no client CA certificate has been configured?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep rotateCertificates /etc/kubernetes/kubelet/kubelet-config.json
-The output should return nothing or true.
- Is it the case that the kubelet cannot rotate client certificate?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep RotateKubeletClientCertificate /etc/kubernetes/kubelet/kubelet-config.json
-The output should return nothing or true.
- Is it the case that the kubelet cannot rotate client certificate?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep makeIPTablesUtilChains /etc/kubernetes/kubelet/kubelet-config.json
-The output should return true.
- Is it the case that the kubelet cannot modify the firewall settings?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep protectKernelDefaults /etc/kubernetes/kubelet/kubelet-config.json
-The output should return true.
- Is it the case that the kubelet can modify kernel parameters?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep RotateKubeletServerCertificate /etc/kubernetes/kubelet/kubelet-config.json
-The output should return true.
- Is it the case that the kubelet cannot rotate server certificate?
-
-
-
- Run the following command on the kubelet node(s):
-$ sudo grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet/kubelet-config.json
-The output should return .
- Is it the case that the streaming connection timeouts are not disabled?
-
-
-
- First, SSH to the relevant node.
-
-Open the Kubelet config file:
-
- cat /etc/kubernetes/kubelet/kubelet-config.json
-
-Verify that the "readOnlyPort" argument exists and is set to 0
- Is it the case that readOnlyPort is not secured?
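As a minimal sketch (assuming jq is available on the node; not part of the original check text), the readOnlyPort value can be read directly instead of inspecting the file by eye:

    # A compliant node returns 0; null means the option is absent from the file.
    sudo jq '.readOnlyPort' /etc/kubernetes/kubelet/kubelet-config.json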
-
-
-
- Perform the following to determine if audit logging is enabled for all log types:
-Via the Management Console
-1. Sign in to the AWS Management Console and open the EKS console at https://console.aws.amazon.com/eks
-2. Click on Cluster Name of the cluster you are auditing
-3. Click Logging
-4. Ensure all 5 choices are set to Enabled
-Via CLI
-aws --region "${REGION_CODE}" eks describe-cluster --name "${CLUSTER_NAME}" --query 'cluster.logging.clusterLogging[?enabled==true].types'
-
-Perform the following to enable audit logging for all log types:
-Via the Management Console
-1. Sign in to the AWS Management Console and open the EKS console at https://console.aws.amazon.com/eks
-2. Click on Cluster Name of the cluster you are auditing
-3. Click Logging
-4. Select Manage Logging from the button on the right hand side
-5. Toggle each selection to the Enabled position.
-6. Click Save Changes
- Is it the case that audit logging is enabled?
-
-
-
- Verify that the every non-control plane namespace has an appropriate
-NetworkPolicy.
-
-To get all the non-control plane namespaces, you can do the
-following command oc get namespaces -o json | jq '[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default") | .metadata.name ]'
-
-To get all the non-control plane namespaces with a NetworkPolicy, you can do the
-following command oc get --all-namespaces networkpolicies -o json | jq '[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default") | .metadata.namespace] | unique'
-
-Make sure that the namespaces displayed in the outputs of the two commands match.
- Is it the case that Namespaced Network Policies needs review?
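A hedged way to compare the two lists mechanically, using the same oc/jq tooling the check text relies on (illustrative sketch, not part of the datastream):

    # Empty diff output means every non-control-plane namespace has at least
    # one NetworkPolicy; any namespace printed with '<' is missing a policy.
    diff \
      <(oc get namespaces -o json | jq -r '.items[].metadata.name' \
          | grep -Ev '^(openshift|kube-|default$)' | sort) \
      <(oc get --all-namespaces networkpolicies -o json \
          | jq -r '.items[].metadata.namespace' | sort -u)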
-
-
-
- Network Policy requires the Network Policy add-on. This add-on is included
-automatically when a cluster with Network Policy is created, but for an
-existing cluster, needs to be added prior to enabling Network Policy.
-
-Enabling/Disabling Network Policy causes a rolling update of all cluster
-nodes, similar to performing a cluster upgrade. This operation is
-long-running and will block other operations on the cluster (including
-delete) until it has run to completion.
-
-If Network Policy is used, a cluster must have at least 2 nodes of type
-n1-standard-1 or higher. The recommended minimum size cluster to run
-Network Policy enforcement is 3 n1-standard-1 instances.
-
-Enabling Network Policy enforcement consumes additional resources in nodes.
-Specifically, it increases the memory footprint of the kube-system
-process by approximately 128MB, and requires approximately 300 millicores of
-CPU.
- Is it the case that network policy is enabled?
-
-
-
- For more information about protecting your workloads using TLS please refer
-to the AWS User Guide:
-
-https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/data-protection.html
- Is it the case that connections to load balancers and workloads are encrypted with TLS?
-
-
-
- Audit:
-Input:
-
-aws eks describe-cluster \
---region region \
---name clustername
-Output:
-...
-"endpointPublicAccess": false,
-"endpointPrivateAccess": true,
-"publicAccessCidrs": [
-"203.0.113.5/32"
-]
-...
-
-Remediation:
-Complete the following steps using the AWS CLI version 1.18.10 or later. You
-can check your current version with aws --version. To install or
-upgrade the AWS CLI, see Installing the AWS CLI.
-
-Update your cluster API server endpoint access with the following AWS CLI
-command. Substitute your cluster name and desired endpoint access values. If
-you set endpointPublicAccess=true, then you can (optionally) enter a
-single CIDR block, or a comma-separated list of CIDR blocks for
-publicAccessCidrs. The blocks cannot include reserved addresses. If you
-specify CIDR blocks, then the public API server endpoint will only receive
-requests from the listed blocks. There is a maximum number of CIDR blocks
-that you can specify. For more information, see Amazon EKS Service Quotas. If
-you restrict access to your public endpoint using CIDR blocks, it is
-recommended that you also enable private endpoint access so that worker nodes
-and Fargate pods (if you use them) can communicate with the cluster. Without
-the private endpoint enabled, your public access endpoint CIDR sources must
-include the egress sources from your VPC. For example, if you have a worker
-node in a private subnet that communicates to the internet through a NAT
-Gateway, you will need to add the outbound IP address of the NAT gateway as
-part of a whitelisted CIDR block on your public endpoint. If you specify no
-CIDR blocks, then the public API server endpoint receives requests from all
-(0.0.0.0/0) IP addresses.
-
-Note
-The following command enables private access and public access from a single IP address
-for the API server endpoint. Replace 203.0.113.5/32 with a single CIDR block, or a comma-
-separated list of CIDR blocks that you want to restrict network access to.
-
-Example command:
-
-aws eks update-cluster-config \
---region region-code \
---name dev \
---resources-vpc-config \
-endpointPublicAccess=true, \
-publicAccessCidrs="203.0.113.5/32",\
-endpointPrivateAccess=true
- Is it the case that the control plane endpoint is secure?
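To confirm the result after running the update, a verification sketch along these lines can be used; describe-cluster itself appears in the audit text above, while the --query expression is an assumption added here for readability:

    # Show the endpoint access settings of the example cluster "dev".
    aws eks describe-cluster --region region-code --name dev \
      --query 'cluster.resourcesVpcConfig.{public: endpointPublicAccess, private: endpointPrivateAccess, cidrs: publicAccessCidrs}'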
-
-
-
- Configure the EKS cluster endpoint to be private. See Modifying Cluster
-Endpoint Access for further information on this topic.
-https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
- Is it the case that private access is enabled and public access is disabled?
-
-
-
- To enable Private Nodes, the cluster has to also be configured with a private
-master IP range and IP Aliasing enabled. Private Nodes do not have outbound
-access to the public internet.
-
-If you want to provide outbound Internet access for your private nodes, you
-can use Cloud NAT or you can manage your own NAT gateway.
- Is it the case that clusters are created with private nodes?
-
-
-
- Ensure all containers and images are coming from approved registries.
-
-References:
-
-https://aws.amazon.com/blogs/opensource/using-open-policy-agent-on-amazon-eks/
- Is it the case that container images come from approved registries?
-
-
-
- Please follow AWS ECS or your 3rd party image scanning provider's guidelines
-for enabling Image Scanning.
-
-Remediation:
-
-To utilize AWS ECR for Image scanning please follow the steps below:
-
-To create a repository configured for scan on push (AWS CLI)
-
-aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
-
-To edit the settings of an existing repository (AWS CLI)
-
-aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
-
-Use the following steps to start a manual image scan using the AWS Management Console.
-
-1. Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
-2. From the navigation bar, choose the Region to create your repository in.
-3. In the navigation pane, choose Repositories.
-4. On the Repositories page, choose the repository that contains the image to scan.
-5. On the Images page, select the image to scan and then choose Scan.
- Is it the case that image vulnerability scanning is enabled?
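A short audit sketch for an existing repository (aws ecr describe-repositories is a documented call; the query expression is an assumption added here):

    # scanOnPush should be true for repositories that scan images on push.
    aws ecr describe-repositories --repository-names $REPO_NAME \
      --region $REGION_CODE \
      --query 'repositories[0].imageScanningConfiguration.scanOnPush'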
-
-
-
- Review AWS ECS worker node IAM role (NodeInstanceRole) IAM Policy Permissions
-to verify that they are set and the minimum required level. If utilizing a
-3rd party tool to scan images utilize the minimum required permission level
-required to interact with the cluster - generally this should be read-only.
-
-Remediation:
-
-You can use your Amazon ECR images with Amazon EKS, but you need to satisfy
-the following prerequisites.
-The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your
-worker nodes must possess the following IAM policy permissions for Amazon
-ECR.
-
-
-{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Action": [
- "ecr:BatchCheckLayerAvailability",
- "ecr:BatchGetImage",
- "ecr:GetDownloadUrlForLayer",
- "ecr:GetAuthorizationToken"
- ],
- "Resource": "*"
- }
- ]
-}
-
- Is it the case that Cluster Service Account has read-only access to Amazon ECR?
-
-
-
- Remediation:
-
-Before you use IAM to manage access to Amazon ECR, you should understand what
-IAM features are available to use with Amazon ECR. To get a high-level view
-of how Amazon ECR and other AWS services work with IAM, see AWS Services That
-Work with IAM in the IAM User Guide.
-
-Topics
-
-
-Amazon ECR Identity-Based Policies
-Amazon ECR Resource-Based Policies
-Authorization Based on Amazon ECR Tags
-Amazon ECR IAM Roles
-
-
-Amazon ECR Identity-Based Policies
-
-With IAM identity-based policies, you can specify allowed or denied actions
-and resources as well as the conditions under which actions are allowed or
-denied. Amazon ECR supports specific actions, resources, and condition keys.
-To learn about all of the elements that you use in a JSON policy, see IAM
-JSON Policy Elements Reference in the IAM User Guide.
-
-Actions
-
-The Action element of an IAM identity-based policy describes the specific
-action or actions that will be allowed or denied by the policy. Policy
-actions usually have the same name as the associated AWS API operation. The
-action is used in a policy to grant permissions to perform the associated
-operation.
-
-Policy actions in Amazon ECR use the following prefix before the action:
-ecr:. For example, to grant someone permission to create an Amazon ECR
-repository with the Amazon ECR CreateRepository API operation, you include
-the ecr:CreateRepository action in their policy. Policy statements must
-include either an Action or NotAction element. Amazon ECR defines its own set
-of actions that describe tasks that you can perform with this service. To
-specify multiple actions in a single statement, separate them with commas as
-follows: "Action": [ "ecr:action1", "ecr:action2" You can specify
-multiple actions using wildcards (*). For example, to specify all
-actions that begin with the word Describe, include the following action:
-"Action": "ecr:Describe*" To see a list of Amazon ECR actions, see
-Actions, Resources, and Condition Keys for Amazon Elastic Container
-Registry in the IAM User Guide.
-
-Resources
-
-The Resource element specifies the object or objects to which the action
-applies. Statements must include either a Resource or a NotResource element.
-You specify a resource using an ARN or using the wildcard (*) to
-indicate that the statement applies to all resources.
-
-An Amazon ECR repository resource has the following ARN:
-arn:${Partition}:ecr:${Region}:${Account}:repository/${Repository-name}
-For more information about the format of ARNs, see Amazon Resource Names
-(ARNs) and AWS Service Namespaces.
-For example, to specify the my-repo repository in the us-east-1 Region in
-your statement, use the following ARN:
-"Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-repo"
-To specify all repositories that belong to a specific account, use the
-wildcard (*): "Resource":
-"arn:aws:ecr:us-east-1:123456789012:repository/*"
-To specify multiple resources in a single statement, separate the ARNs with
-commas. "Resource": [ "resource1", "resource2"
-To see a list of Amazon ECR resource types and their ARNs, see Resources
-Defined by Amazon Elastic Container Registry in the IAM User Guide. To learn
-with which actions you can specify the ARN of each resource, see Actions
-Defined by Amazon Elastic Container Registry.
-
-Condition Keys
-
-The Condition element (or Condition block) lets you specify conditions in
-which a statement is in effect. The Condition element is optional. You can
-build conditional expressions that use condition operators, such as equals or
-less than, to match the condition in the policy with values in the request.
-If you specify multiple Condition elements in a statement, or multiple keys
-in a single Condition element, AWS evaluates them using a logical AND
-operation. If you specify multiple values for a single condition key, AWS
-evaluates the condition using a logical OR operation. All of the conditions
-must be met before the statement's permissions are granted.
-You can also use placeholder variables when you specify conditions. For
-example, you can grant an IAM user permission to access a resource only if it
-is tagged with their IAM user name. For more information, see IAM Policy
-Elements: Variables and Tags in the IAM User Guide.
-Amazon ECR defines its own set of condition keys and also supports using some global
-condition keys. To see all AWS global condition keys, see AWS Global Condition Context
-Keys in the IAM User Guide.
-Most Amazon ECR actions support the aws:ResourceTag and ecr:ResourceTag
-condition keys. For more information, see Using Tag-Based Access Control. To
-see a list of Amazon ECR condition keys, see Condition Keys Defined by Amazon
-Elastic Container Registry in the IAM User Guide. To learn with which actions
-and resources you can use a condition key, see Actions Defined by Amazon
-Elastic Container Registry.
- Is it the case that access to the container image registry is restricted?
-
-
-
- Audit:
-
-For Amazon EKS clusters with Secrets Encryption enabled, look for
-'encryptionConfig' configuration when you run:
-aws eks describe-cluster --name="cluster-name"
-
-Remediation:
-
-Enable 'Secrets Encryption' during Amazon EKS cluster creation as
-described in the links within the 'References' section.
-
-References:
-
- https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
- https://eksworkshop.com/beginner/191_secrets/
-
- Is it the case that kubernetes secrets are encrypted in etcd?
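For convenience, a minimal sketch narrowing the describe-cluster output to the relevant field (the query path is an assumption based on the 'encryptionConfig' key named above):

    # A non-empty result indicates Secrets Encryption is configured.
    aws eks describe-cluster --name "cluster-name" \
      --query 'cluster.encryptionConfig'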
-
-
-
- To check the group ownership of /etc/kubernetes/kubelet/kubelet-config.json,
-run the command:
-$ ls -lL /etc/kubernetes/kubelet/kubelet-config.json
-If properly configured, the output should indicate the following group-owner:
-root
- Is it the case that /etc/kubernetes/kubelet/kubelet-config.json does not have a group owner of root?
-
-
-
- To check the group ownership of /var/lib/kubelet/kubeconfig,
-run the command:
-$ ls -lL /var/lib/kubelet/kubeconfig
-If properly configured, the output should indicate the following group-owner:
-root
- Is it the case that /var/lib/kubelet/kubeconfig does not have a group owner of root?
-
-
-
- To check the ownership of /etc/kubernetes/kubelet/kubelet-config.json,
-run the command:
-$ ls -lL /etc/kubernetes/kubelet/kubelet-config.json
-If properly configured, the output should indicate the following owner:
-root
- Is it the case that /etc/kubernetes/kubelet/kubelet-config.json does not have an owner of root?
-
-
-
- To check the ownership of /var/lib/kubelet/kubeconfig,
-run the command:
-$ ls -lL /var/lib/kubelet/kubeconfig
-If properly configured, the output should indicate the following owner:
-root
- Is it the case that /var/lib/kubelet/kubeconfig does not have an owner of root?
-
-
-
- To check the permissions of /etc/kubernetes/kubelet/kubelet-config.json,
-run the command:
-$ ls -l /etc/kubernetes/kubelet/kubelet-config.json
-If properly configured, the output should indicate the following permissions:
--rw-r--r--
- Is it the case that /etc/kubernetes/kubelet/kubelet-config.json does not have unix mode -rw-r--r--?
-
-
-
- To check the permissions of /var/lib/kubelet/kubeconfig,
-run the command:
-$ ls -l /var/lib/kubelet/kubeconfig
-If properly configured, the output should indicate the following permissions:
--rw-r--r--
- Is it the case that /var/lib/kubelet/kubeconfig does not have unix mode -rw-r--r--?
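A sketch of tightening and verifying the mode, run as root on the affected node:
$ chmod 0644 /var/lib/kubelet/kubeconfig
$ stat -c '%a' /var/lib/kubelet/kubeconfig
644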
-
-
-
-
-
-
-
-
- combine_ovals.py from SCAP Security Guide
- ssg: [0, 1, 64], python: 3.10.6
- 5.11
- 2022-08-11T18:55:39
-
-
-
-
- Alibaba Cloud Linux 2
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Alibaba Cloud Linux 2
-
-
-
-
-
-
-
-
- Alibaba Cloud Linux 3
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Alibaba Cloud Linux 3
-
-
-
-
-
-
-
-
- CentOS 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS 7
-
-
-
-
-
-
-
-
- CentOS 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS 8
-
-
-
-
-
-
-
-
-
- CentOS Stream 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- CentOS Stream 9
-
-
-
-
-
-
-
-
-
- Debian
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed is a Debian System
-
-
-
-
-
-
-
-
- Debian Linux 10
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 10
-
-
-
-
-
-
-
-
- Debian Linux 11
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 11
-
-
-
-
-
-
-
-
- Debian 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Debian 9
-
-
-
-
-
-
-
-
- Installed operating system is Fedora
-
- Amazon Elastic Kubernetes Service
-
-
-
-
-
- The operating system installed on the system is Fedora
-
-
-
-
-
-
-
-
-
- Oracle Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 7
-
-
-
-
-
-
-
-
-
-
- Oracle Linux 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 8
-
-
-
-
-
-
-
-
-
-
- Oracle Linux 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Oracle Linux 9
-
-
-
-
-
-
-
-
-
-
- openSUSE
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed on the system is openSUSE.
-
-
-
-
-
-
-
-
- openSUSE Leap 15
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is openSUSE Leap 15.
-
-
-
-
-
-
-
-
- openSUSE Leap 42
-
- Amazon Elastic Kubernetes Service
-
-
-
-
- The operating system installed on the system is openSUSE Leap 42.
-
-
-
-
-
-
-
-
- Installed operating system is part of the Unix family
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed on the system is part of the Unix OS family
-
-
-
-
-
-
-
- Red Hat Enterprise Linux CoreOS
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux CoreOS release 4
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 7
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 8
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.0
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.0
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.1
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.1
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.2
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.2
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.3
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.3
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.4
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.4
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.5
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.5
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.6
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.6
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.7
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.8
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.8
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.9
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 8.10
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Red Hat Enterprise Linux 8.10
-
-
-
-
-
-
-
- Red Hat Enterprise Linux 9
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Enterprise Linux 9
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Red Hat Virtualization 4
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Red Hat Virtualization Host 4.4+ or Red Hat Enterprise Host.
-
-
-
-
-
-
-
-
- Scientific Linux 7
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is
- Scientific Linux 7
-
-
-
-
-
-
-
-
- SUSE Linux Enterprise 12
-
- Amazon Elastic Kubernetes Service
-
-
-
- The operating system installed on the system is
- SUSE Linux Enterprise 12.
-
-
-
-
-
-
-
-
-
-
-
-
- SUSE Linux Enterprise 15
-
- Amazon Elastic Kubernetes Service
-
-
-
- The operating system installed on the system is
- SUSE Linux Enterprise 15.
-
-
-
-
-
-
-
-
-
-
-
-
- Ubuntu
-
- Amazon Elastic Kubernetes Service
-
- The operating system installed is an Ubuntu System
-
-
-
-
-
-
-
-
-
- Ubuntu 1604
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 1604
-
-
-
-
-
-
-
-
- Ubuntu 1804
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 1804
-
-
-
-
-
-
-
-
- Ubuntu 2004
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is Ubuntu 2004
-
-
-
-
-
-
-
-
- UnionTech OS Server 20
-
- Amazon Elastic Kubernetes Service
-
-
- The operating system installed on the system is UnionTech OS Server 20
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is EKS.
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service 1.21
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is Amazon Elastic Kubernetes Service 1.21.
-
-
-
-
-
-
-
-
- Amazon Elastic Kubernetes Service Node
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is EKS 4.
-
-
-
-
-
-
-
- Red Hat Virtualization 4
-
- Amazon Elastic Kubernetes Service
-
-
- The application installed on the system is
- Red Hat Virtualization 4.
-
-
-
-
-
-
-
-
- Package audit is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package audit is installed.
-
-
-
-
-
-
-
-
- Package chrony is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package chrony is installed.
-
-
-
-
-
-
-
-
- Package gdm is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package gdm is installed.
-
-
-
-
-
-
-
-
- Package grub2 is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package grub2-common is installed.
-
-
-
-
-
-
-
-
-
-
-
-
- Package libuser is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package libuser is installed.
-
-
-
-
-
-
-
-
- Package providing /etc/login.defs is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if the package providing /etc/login.defs is installed.
-
-
-
-
-
-
-
-
- Package net-snmp is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package net-snmp is installed.
-
-
-
-
-
-
-
-
- Check if the system doesn't act as an oVirt host or manager
-
- Amazon Elastic Kubernetes Service
-
- Check if the system has neither ovirt-host nor ovirt-engine installed.
-
-
-
-
-
-
-
- Package nss-pam-ldapd is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package nss-pam-ldapd is installed.
-
-
-
-
-
-
-
-
- Package ntp is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package ntp is installed.
-
-
-
-
-
-
-
-
- Check if the system acts as an oVirt host or manager
-
- Amazon Elastic Kubernetes Service
-
- Check if the system has ovirt-host or ovirt-engine installed
-
-
-
-
-
-
-
-
-
- Package pam is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package pam is installed.
-
-
-
-
-
-
-
-
- Package polkit is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package polkit is installed.
-
-
-
-
-
-
-
-
- Package postfix is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package postfix is installed.
-
-
-
-
-
-
-
-
- Package sssd-common is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package sssd-common is installed.
-
-
-
-
-
-
-
-
- Package sudo is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package sudo is installed.
-
-
-
-
-
-
-
-
- Package systemd is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package systemd is installed.
-
-
-
-
-
-
-
-
- Package tftp-server is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package tftp-server is installed.
-
-
-
-
-
-
-
-
- Package tmux is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package tmux is installed.
-
-
-
-
-
-
-
-
- Package usbguard is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package usbguard is installed.
-
-
-
-
-
-
-
-
- WiFi interface is present
-
- Amazon Elastic Kubernetes Service
-
- Checks if any wifi interface is present.
-
-
-
-
-
-
-
-
- Package yum is installed
-
- Amazon Elastic Kubernetes Service
-
- Checks if package yum is installed.
-
-
-
-
-
-
-
-
- System uses zIPL
-
- Amazon Elastic Kubernetes Service
-
- Checks if system uses zIPL bootloader.
-
-
-
-
-
-
-
-
- Check if the scan target is a container
-
- Amazon Elastic Kubernetes Service
-
- Check for presence of files characterizing container filesystems.
-
-
-
-
-
-
-
-
-
- Check if the scan target is a machine
-
- Amazon Elastic Kubernetes Service
-
- Check for absence of files characterizing container filesystems.
-
-
-
-
-
-
-
-
- Kerberos server is older than 1.17-18
-
- Amazon Elastic Kubernetes Service
-
-
- Check if the version of the Kerberos server is less than 1.17-18
-
-
-
-
-
-
-
-
- Kerberos workstation is older than 1.17-18
-
- Amazon Elastic Kubernetes Service
-
-
- Check if the version of the Kerberos workstation is less than 1.17-18
-
-
-
-
-
-
-
-
- Test that the architecture is aarch64
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is aarch64
-
-
-
-
-
-
-
- Test that the architecture is not aarch64
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is not aarch64
-
-
-
-
-
-
-
- Test that the architecture is not s390x
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is not s390x
-
-
-
-
-
-
-
- Test that the architecture is ppc64le
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is ppc64le
-
-
-
-
-
-
-
- Test that the architecture is s390x
-
- Amazon Elastic Kubernetes Service
-
- Check that architecture of kernel in /proc/sys/kernel/osrelease is s390x
-
-
-
-
-
-
-
- SSSD is configured to use LDAP
-
- Amazon Elastic Kubernetes Service
-
- Identification provider is not set to ad within /etc/sssd/sssd.conf
-
-
-
-
-
-
-
-
- Non-UEFI system boot mode check
-
- Amazon Elastic Kubernetes Service
-
- Check if System boot mode is non-UEFI.
-
-
-
-
-
-
-
-
- UEFI system boot mode check
-
- Amazon Elastic Kubernetes Service
-
- Check if system boot mode is UEFI.
-
-
-
-
-
-
-
-
- alinux-release
-
-
- alinux-release
-
-
- centos-release
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)"$
- 1
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)"$
- 1
-
-
- /etc/debian_version
-
-
- /etc/debian_version
- ^10.[0-9]+$
- 1
-
-
- /etc/debian_version
- ^11.[0-9]+$
- 1
-
-
- /etc/debian_version
- ^9.[0-9]+$
- 1
-
-
- fedora-release.*
-
-
- /etc/system-release-cpe
- ^cpe:\/o:fedoraproject:fedora:[\d]+$
- 1
-
-
- oraclelinux-release
-
-
- oraclelinux-release
-
-
- oraclelinux-release
-
-
- openSUSE-release
-
-
- openSUSE-release
-
-
- openSUSE-release
-
-
-
- /etc/os-release
- ^ID="(\w+)"$
- 1
-
-
- /etc/os-release
- ^VERSION_ID="(\d)\.\d+"$
- 1
-
-
-
- redhat-release-client
-
-
- redhat-release-workstation
-
-
- redhat-release-server
-
-
- redhat-release-computenode
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- redhat-release
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
-
- redhat-release
-
-
- /etc/redhat-release
- ^Red Hat Enterprise Linux release (\d)\.\d+$
- 1
-
-
- redhat-release-virtualization-host
-
-
- sl-release
-
-
-
- sled-release
-
-
- sles-release
-
-
- SLES_SAP-release
-
-
-
- sled-release
-
-
- sles-release
-
-
- SLES_SAP-release
-
-
- /etc/lsb-release
-
-
- /etc/lsb-release
- ^DISTRIB_ID=Ubuntu$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=xenial$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=bionic$
- 1
-
-
- /etc/lsb-release
- ^DISTRIB_CODENAME=focal$
- 1
-
-
- uos-release
-
-
-
-
-
-
- .gitVersion
-
-
- /var/lib/kubelet/kubeconfig
-
-
- rhvm-appliance
-
-
- audit
-
-
- chrony
-
-
- gdm
-
-
- grub2-common
-
-
- /sys/firmware/opal
-
-
- libuser
-
-
- shadow-utils
-
-
- net-snmp
-
-
- nss-pam-ldapd
-
-
- ntp
-
-
- ovirt-host
-
-
- ovirt-engine
-
-
- pam
-
-
- polkit
-
-
- postfix
-
-
- sssd-common
-
-
- sudo
-
-
- systemd
-
-
- tftp-server
-
-
- tmux
-
-
- usbguard
-
-
- /proc/net/wireless
-
-
- yum
-
-
- s390utils-base
-
-
- /.dockerenv
-
-
- /run/.containerenv
-
-
- krb5-server
-
-
- krb5-workstation
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
- /proc/sys/kernel/osrelease
- ^.*\.(.*)$
- 1
-
-
- /etc/sssd/sssd.conf
- ^[\s]*\[domain\/[^]]*]([^\n\[\]]*\n+)+?[\s]*id_provider[ \t]*=[ \t]*((?i)ad)[ \t]*$
- 1
-
-
- /sys/firmware/efi
-
-
-
-
-
-
- ^2.*$
-
-
- ^3.*$
-
-
- ^7.*$
-
-
- centos
-
-
- 8
-
-
- centos
-
-
- 9
-
-
- ^7.*$
-
-
- ^8.*$
-
-
- ^9.*$
-
-
- openSUSE-release
-
-
- ^15.*$
-
-
- ^42.*$
-
-
- unix
-
-
- rhcos
-
-
- 4
-
-
- unix
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- ^7.*$
-
-
- 7
-
-
- unix
-
-
- ^8.*$
-
-
- ^8.0*$
-
-
- ^8.1*$
-
-
- ^8.2*$
-
-
- ^8.3*$
-
-
- ^8.4*$
-
-
- ^8.5*$
-
-
- ^8.6*$
-
-
- ^8.7*$
-
-
- ^8.8*$
-
-
- ^8.9*$
-
-
- ^8.10*$
-
-
- 8
-
-
- unix
-
-
- ^9.*$
-
-
- 9
-
-
- 0:4.4
-
-
- ^7.*$
-
-
- unix
-
-
- ^12.*$
-
-
- ^12.*$
-
-
- ^12.*$
-
-
- unix
-
-
- ^15.*$
-
-
- ^15.*$
-
-
- ^15.*$
-
-
- ^20.*$
-
-
-
- ^.*-eks-.*$
-
-
-
-
- ^v1\.21\..*
-
-
-
- ^4.*$
-
-
- 0:1.17-18
-
-
- 0:1.17-18
-
-
- ^aarch64$
-
-
- ^ppc64le$
-
-
- ^s390x$
-
-
- ppc64le
-
-
-
-
- /kubernetes-api-resources/version
-
-
-
-
-
diff --git a/images/testcontent/kubelet_default/ssg-ocp4-ds.xml b/images/testcontent/kubelet_default/ssg-ocp4-ds.xml
index a87158690..a09ae7a75 100644
--- a/images/testcontent/kubelet_default/ssg-ocp4-ds.xml
+++ b/images/testcontent/kubelet_default/ssg-ocp4-ds.xml
@@ -1,6 +1,6 @@
-
-
+
+
@@ -9,7 +9,7 @@
-
+
@@ -22,7 +22,7 @@
-
+
System architecture is not S390X
@@ -130,9 +130,9 @@
-
-
- draft
+
+
+ draft
Guide to the Secure Configuration of Red Hat OpenShift Container Platform 4
This guide presents a catalog of security-relevant
configuration settings for Red Hat OpenShift Container Platform 4. It is a rendering of
@@ -193,9 +193,10 @@ respective companies.
-
-
+
+
+
@@ -207,10 +208,11 @@ respective companies.
-
+
+
@@ -246,6 +248,11 @@ respective companies.
+
+
+
+
+
@@ -292,7 +299,7 @@ respective companies.
- 0.1.64
+ 0.1.65
SCAP Security Guide Project
SCAP Security Guide Project
@@ -412,6 +419,7 @@ respective companies.
Joseph Lenox <joseph.lenox@collins.com>
Jan Lieskovsky <jlieskov@redhat.com>
Markus Linnala <Markus.Linnala@knowit.fi>
+ Flos Lonicerae <lonicerae@gmail.com>
Šimon Lukašík <slukasik@redhat.com>
Milan Lysonek <mlysonek@redhat.com>
Fredrik Lysén <fredrik@pipemore.se>
@@ -637,32 +645,30 @@ This profile is applicable to OpenShift versions 4.6 and greater.
-
-
-
+
+
+
+
+
+
-
-
-
-
+
+
-
-
-
-
-
-
-
+
+
+
+
@@ -756,11 +762,31 @@ This profile is applicable to OpenShift versions 4.6 and greater.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -781,21 +807,21 @@ This profile is applicable to OpenShift versions 4.6 and greater.
-
+
-
-
-
-
+
+
-
-
+
+
+
+
@@ -822,29 +848,29 @@ https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-work
-
+
+
+
+
+
-
-
-
-
+
+
-
-
-
-
-
-
+
+
+
+
@@ -970,34 +996,32 @@ consensus and release processes.
-
-
-
+
+
+
+
+
+
-
-
-
-
+
-
-
-
-
-
-
+
+
+
+
@@ -1113,7 +1137,7 @@ consensus and release processes.
-
+
@@ -1123,11 +1147,31 @@ consensus and release processes.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -1162,180 +1206,24 @@ consensus and release processes.
-
-
-
-
-
+
-
-
-
-
- NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level
- This compliance profile reflects the core set of Moderate-Impact Baseline
-configuration settings for deployment of Red Hat OpenShift Container
-Platform into U.S. Defense, Intelligence, and Civilian agencies.
-Development partners and sponsors include the U.S. National Institute
-of Standards and Technology (NIST), U.S. Department of Defense,
-the National Security Agency, and Red Hat.
-
-This baseline implements configuration requirements from the following
-sources:
-
-- NIST 800-53 control selections for Moderate-Impact systems (NIST 800-53)
-
-For any differing configuration requirements, e.g. password lengths, the stricter
-security setting was chosen. Security Requirement Traceability Guides (RTMs) and
-sample System Security Configuration Guides are provided via the
-scap-security-guide-docs package.
-
-This profile reflects U.S. Government consensus content and is developed through
-the ComplianceAsCode initiative, championed by the National
-Security Agency. Except for differences in formatting to accommodate
-publishing processes, this profile mirrors ComplianceAsCode
-content as minor divergences, such as bugfixes, work through the
-consensus and release processes.
-
- NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level
+
+ NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level
This compliance profile reflects the core set of Moderate-Impact Baseline
configuration settings for deployment of Red Hat OpenShift Container
Platform into U.S. Defense, Intelligence, and Civilian agencies.
@@ -1359,164 +1247,6 @@ Security Agency. Except for differences in formatting to accommodate
publishing processes, this profile mirrors ComplianceAsCode
content as minor divergences, such as bugfixes, work through the
consensus and release processes.
- https://nvd.nist.gov/800-53/Rev4/impact/moderate
-
- North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Node level
- This compliance profile reflects a set of security recommendations for
-the usage of Red Hat OpenShift Container Platform in critical
-infrastructure in the energy sector. This follows the recommendations
-coming from the following CIP standards:
-
-- CIP-002-5
-- CIP-003-8
-- CIP-004-6
-- CIP-005-6
-- CIP-007-3
-- CIP-007-6
-- CIP-009-6
- https://www.nerc.com/pa/Stand/AlignRep/One%20Stop%20Shop.xlsx
@@ -1611,34 +1341,32 @@ coming from the following CIP standards:
-
-
-
+
+
+
+
+
+
-
-
-
-
+
-
-
-
-
-
-
+
+
+
+
@@ -1648,21 +1376,32 @@ coming from the following CIP standards:
-
- North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Platform level
- This compliance profile reflects a set of security recommendations for
-the usage of Red Hat OpenShift Container Platform in critical
-infrastructure in the energy sector. This follows the recommendations
-coming from the following CIP standards:
+
+ NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level
+ This compliance profile reflects the core set of Moderate-Impact Baseline
+configuration settings for deployment of Red Hat OpenShift Container
+Platform into U.S. Defense, Intelligence, and Civilian agencies.
+Development partners and sponsors include the U.S. National Institute
+of Standards and Technology (NIST), U.S. Department of Defense,
+the National Security Agency, and Red Hat.
-- CIP-002-5
-- CIP-003-8
-- CIP-004-6
-- CIP-005-6
-- CIP-007-3
-- CIP-007-6
-- CIP-009-6
- https://www.nerc.com/pa/Stand/AlignRep/One%20Stop%20Shop.xlsx
+This baseline implements configuration requirements from the following
+sources:
+
+- NIST 800-53 control selections for Moderate-Impact systems (NIST 800-53)
+
+For any differing configuration requirements, e.g. password lengths, the stricter
+security setting was chosen. Security Requirement Traceability Guides (RTMs) and
+sample System Security Configuration Guides are provided via the
+scap-security-guide-docs package.
+
+This profile reflects U.S. Government consensus content and is developed through
+the ComplianceAsCode initiative, championed by the National
+Security Agency. Except for differences in formatting to accommodate
+publishing processes, this profile mirrors ComplianceAsCode
+content as minor divergences, such as bugfixes, work through the
+consensus and release processes.
+ https://nvd.nist.gov/800-53/Rev4/impact/moderate
@@ -1741,7 +1480,7 @@ coming from the following CIP standards:
-
+
@@ -1750,11 +1489,31 @@ coming from the following CIP standards:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -1789,27 +1548,41 @@ coming from the following CIP standards:
-
-
-
-
-
+
+
+
+
+
-
- PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
- Ensures PCI-DSS v3.2.1 security configuration settings are applied.
- https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf
+
+ North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Node level
+ This compliance profile reflects a set of security recommendations for
+the usage of Red Hat OpenShift Container Platform in critical
+infrastructure in the energy sector. This follows the recommendations
+coming from the following CIP standards:
+
+- CIP-002-5
+- CIP-003-8
+- CIP-004-6
+- CIP-005-6
+- CIP-007-3
+- CIP-007-6
+- CIP-009-6
+ https://www.nerc.com/pa/Stand/AlignRep/One%20Stop%20Shop.xlsx
+
+
+
@@ -1879,6 +1652,7 @@ coming from the following CIP standards:
+
@@ -1899,45 +1673,56 @@ coming from the following CIP standards:
-
-
-
+
+
+
+
+
+
+
-
-
-
-
+
-
-
-
-
-
-
-
+
+
+
+
+
+
-
- PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
- Ensures PCI-DSS v3.2.1 security configuration settings are applied.
- https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf
+
+ North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Platform level
+ This compliance profile reflects a set of security recommendations for
+the usage of Red Hat OpenShift Container Platform in critical
+infrastructure in the energy sector. This follows the recommendations
+coming from the following CIP standards:
+
+- CIP-002-5
+- CIP-003-8
+- CIP-004-6
+- CIP-005-6
+- CIP-007-3
+- CIP-007-6
+- CIP-009-6
+ https://www.nerc.com/pa/Stand/AlignRep/One%20Stop%20Shop.xlsx
@@ -1984,8 +1769,15 @@ coming from the following CIP standards:
+
+
+
+
+
+
+
@@ -1995,9 +1787,9 @@ coming from the following CIP standards:
+
-
@@ -2009,31 +1801,62 @@ coming from the following CIP standards:
+
+
+
+
+
+
+
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-
+
-
+
+
+
+
+
+
@@ -2046,34 +1869,27 @@ coming from the following CIP standards:
-
-
-
-
-
-
-
+
-
+
+
+
+
+
+
-
- [DRAFT] DISA STIG for Red Hat OpenShift Container Platform 4 - Node level
- This is a draft profile for experimental purposes. It is not based on the
-DISA STIG for OCP4, because one was not available at the time yet. This
-profile contains configuration checks that align to the DISA STIG for
-Red Hat OpenShift Container Platform 4.
- https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_Container_Platform_V1R3_SRG.zip
+
+ PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
+ Ensures PCI-DSS v3.2.1 security configuration settings are applied.
+ https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf
-
-
-
@@ -2131,6 +1947,9 @@ Red Hat OpenShift Container Platform 4.
+
+
+
@@ -2140,7 +1959,6 @@ Red Hat OpenShift Container Platform 4.
-
@@ -2154,60 +1972,50 @@ Red Hat OpenShift Container Platform 4.
+
+
+
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
-
-
-
-
+
+
-
-
-
-
-
-
+
+
+
+
-
-
- [DRAFT] DISA STIG for Red Hat OpenShift Container Platform 4 - Platform level
- This is a draft profile for experimental purposes. It is not based on the
-DISA STIG for OCP4, because one was not available at the time yet. This
-profile contains configuration checks that align to the DISA STIG for
-Red Hat OpenShift Container Platform 4.
- https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_Container_Platform_V1R3_SRG.zip
+
+ PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
+ Ensures PCI-DSS v3.2.1 security configuration settings are applied.
+ https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf
@@ -2254,12 +2062,8 @@ Red Hat OpenShift Container Platform 4.
-
-
-
-
@@ -2271,6 +2075,7 @@ Red Hat OpenShift Container Platform 4.
+
@@ -2279,9 +2084,9 @@ Red Hat OpenShift Container Platform 4.
+
-
@@ -2289,27 +2094,44 @@ Red Hat OpenShift Container Platform 4.
+
+
+
+
+
-
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-
-
+
+
-
-
@@ -2322,20 +2144,22 @@ Red Hat OpenShift Container Platform 4.
-
+
+
-
-
-
-
+
-
+
+
+
+
+
Introduction
@@ -2666,19 +2490,27 @@ AWS resources will be able, through IAM policies, to use the KMS key to eventual
-
- Ensure that the cluster was installed with FIPS mode enabled
+
+ Ensure that FIPS mode is enabled on all cluster nodes
OpenShift has an installation-time flag that can enable FIPS mode
for the cluster. The flag fips: true must be enabled
at install time in the install-config.yaml file.
This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/machineconfiguration.openshift.io/v1/machineconfigs/99-master-fips API endpoint to the local /apis/machineconfiguration.openshift.io/v1/machineconfigs/99-master-fips file.
+Therefore, you need to use a tool that can query the OCP API, retrieve the following:
+/apis/machineconfiguration.openshift.io/v1/machineconfigs
+ API endpoint, filter with the jq utility using the following filter
+ [.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)
+ and persist it to the local
+ /apis/machineconfiguration.openshift.io/v1/machineconfigs#ab7e02a1c3f44ae48f843ce3dee7b948d624d2f702b9428760efbfd4653847ba
+ file.
+
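For a manual spot-check equivalent to the filter above, a hedged sketch using the oc client and jq (assumes a logged-in cluster-admin session):
$ oc get machineconfigs -o json \
    | jq '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)'
FIPS mode is enabled for the pools only if every element of the resulting array is true.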
CIP-003-8 R4.2
CIP-007-3 R5.1
CIP-007-3 R7.1
AC-17(2)
SC-13
IA-7
+ Req-3.4.1
SRG-APP-000014-CTR-000035
SRG-APP-000014-CTR-000040
SRG-APP-000416-CTR-001015
@@ -2690,34 +2522,6 @@ Therefore, you need to use a tool that can query the OCP API, retrieve the Use of weak or untested encryption algorithms undermines the purposes of utilizing encryption to
protect data. The system must implement cryptographic modules adhering to the higher
standards approved by the federal government since this provides assurance they have been tested
-and validated.
- CCE-84214-6
-
-
-
-
-
-
-
-
-
- Ensure that FIPS mode is enabled on all cluster nodes
- OpenShift has an installation-time flag that can enable FIPS mode
-for the cluster. The flag fips: true must be enabled
-at install time in the install-config.yaml file.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/machineconfiguration.openshift.io/v1/machineconfigs
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.name | test("^[0-9]{2}-worker$|^[0-9]{2}-master$"))]|map(.spec.fips == true)
- and persist it to the local
- /apis/machineconfiguration.openshift.io/v1/machineconfigs#191c7889a801949fcc07c8f067ca719c614388ea53f4b96b7148c57799e423b3
- file.
-
- Req-3.4.1
- Use of weak or untested encryption algorithms undermines the purposes of utilizing encryption to
-protect data. The system must implement cryptographic modules adhering to the higher
-standards approved by the federal government since this provides assurance they have been tested
and validated.
CCE-85860-5
@@ -2765,11 +2569,11 @@ at installation. The object luks must be present at install
prepared with the install-config.yaml file.
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/machineconfiguration.openshift.io/v1/machineconfigs
+/apis/machineconfiguration.openshift.io/v1/machineconfigs
API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.name | test("^[0-9]{2}-worker$|^[0-9]{2}-master$"))]|map(.spec.config.storage.luks[0].clevis != null)
+ [.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)
and persist it to the local
- /apis/machineconfiguration.openshift.io/v1/machineconfigs#136fe907b51dc9ea5011707799731b533561dab4b043f086f36c0b5c9c288414
+ /apis/machineconfiguration.openshift.io/v1/machineconfigs#9fab597988075d76a1c081cdc533f05623251a854b9936a08ae52cca5fc5a311
file.
Req-3.4.1
@@ -2797,17 +2601,17 @@ disk encryption can be used as well. [1][2]
[2] https://docs.openshift.com/container-platform/latest/machine_management/creating_machinesets/creating-machineset-gcp.html#machineset-enabling-customer-managed-encryption_creating-machineset-gcp
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/machineconfiguration.openshift.io/v1/machineconfigs
+/apis/machineconfiguration.openshift.io/v1/machineconfigs
API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.name | test("^[0-9]{2}-worker$|^[0-9]{2}-master$"))]|map(.spec.fips == true)
+ [.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)
and persist it to the local
- /apis/machineconfiguration.openshift.io/v1/machineconfigs#191c7889a801949fcc07c8f067ca719c614388ea53f4b96b7148c57799e423b3
+ /apis/machineconfiguration.openshift.io/v1/machineconfigs#ab7e02a1c3f44ae48f843ce3dee7b948d624d2f702b9428760efbfd4653847ba
file.
- /apis/machineconfiguration.openshift.io/v1/machineconfigs
+ /apis/machineconfiguration.openshift.io/v1/machineconfigs
API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.name | test("^[0-9]{2}-worker$|^[0-9]{2}-master$"))]|map(.spec.config.storage.luks[0].clevis != null)
+ [.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)
and persist it to the local
- /apis/machineconfiguration.openshift.io/v1/machineconfigs#136fe907b51dc9ea5011707799731b533561dab4b043f086f36c0b5c9c288414
+ /apis/machineconfiguration.openshift.io/v1/machineconfigs#9fab597988075d76a1c081cdc533f05623251a854b9936a08ae52cca5fc5a311
file.
/apis/machine.openshift.io/v1beta1/machinesets?limit=500
API endpoint, filter with with the jq utility using the following filter
@@ -3528,7 +3332,7 @@ In a large multi-tenant cluster, there might be a small percentage of
misbehaving tenants which could have a significant impact on the
performance of the cluster overall. It is recommended to limit the rate
of events that the API Server will accept.
-
+
CCE-86390-2
@@ -3864,6 +3668,7 @@ the internal service. The value is set by the bindAddress argument under the se
parameter.
CCE-83646-0
+
@@ -4091,8 +3896,8 @@ requires the API Server to identify itself to the etcd server using
a SSL Certificate Authority file.
CCE-84216-1
-
+
@@ -4437,7 +4242,7 @@ Therefore, you need to use a tool that can query the OCP API, retrieve the follo
HTTPS endpoints. Requests from the API Server are treated anonymously.
Configuring certificate-based kubelet authentication ensures that the
API Server authenticates itself to kubelets when submitting requests.
-
+
CCE-84080-1
@@ -4526,7 +4331,7 @@ Therefore, you need to use a tool that can query the OCP API, retrieve the follo
HTTPS endpoints. Requests from the API Server are treated anonymously.
Configuring certificate-based kubelet authentication ensures that the
API Server authenticates itself to kubelets when submitting requests.
-
+
CCE-83591-8
@@ -5163,8 +4968,8 @@ old log files to keep as 10, there would be approximately 1 GB of log data
available for use in analysis.
CCE-83687-4
-
+
@@ -5179,6 +4984,12 @@ to multiple authentication services. Some of these authentication
methods may not be secure or common methodologies, or they may not
be secure by default. This section introduces mechanisms for
configuring authentication systems in Kubernetes.
+
+ OAuth Clients Token Inactivity Timeout
+ Enter OAuth Clients Token Inactivity Timeout in Seconds
+ 600
+ 600
+
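A hedged sketch of setting this timeout on a single OAuth client; the client name, secret, and redirect URI are hypothetical, and the field names follow the oauth.openshift.io/v1 OAuthClient API:
$ oc apply -f - <<'EOF'
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: example-client
grantMethod: auto
secret: example-secret
redirectURIs:
  - https://example.com/oauth/callback
accessTokenInactivityTimeoutSeconds: 600
EOF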
OAuth Token Inactivity Timeout
Enter OAuth Token Inactivity Timeout
@@ -5420,6 +5231,7 @@ spec:
+
@@ -5587,7 +5399,7 @@ of opportunity for unauthorized personnel to take control of a session
that has been left unattended.
CCE-84178-3
-
+
@@ -7388,7 +7200,7 @@ The kubelet takes a set of PodSpecs that are provided through various
mechanisms and ensures that the containers described in those PodSpecs are
running and healthy. The kubelet doesn’t manage containers which were not
created by Kubernetes.
-
+
Configure Kubelet Event Limit
Maximum event creations per second.
5
@@ -7493,18 +7305,22 @@ created by Kubernetes.
Configure Kubelet use of the Strong Cryptographic Ciphers
Cryptographic Ciphers Available for Kubelet, separated by commas
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Configure Kubelet use of the Strong Cryptographic Ciphers
Cryptographic Ciphers Available for Kubelet
- ^(TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256)$
+ ^(TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256|TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256)$
+
+
+ Configure which node to scan based on role
+ Configure which node to scan based on role
+ master
-
+
Configure which node to scan based on role
Configure which node to scan based on role
worker
- master
Streaming Connection Timeout Options
@@ -7514,13 +7330,13 @@ and (h) for hours.
5m0s
10m0s
30m0s
- 1h
- 2h
- 4h
- 6h
- 8h
+ 1h0m0s
+ 2h0m0s
+ 4h0m0s
+ 6h0m0s
+ 8h0m0s
-
+
Disable Anonymous Authentication to the Kubelet
By default, anonymous access to the Kubelet server is enabled. This
configuration check ensures that anonymous requests to the Kubelet
@@ -7534,6 +7350,8 @@ authentication:
enabled: false
...
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -7548,16 +7366,131 @@ authentication methods are treated as anonymous requests. These
requests are then served by the Kubelet server. OpenShift Operators should
rely on authentication to authorize access and disallow anonymous
requests.
-
+
CCE-83815-1
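A hedged spot-check of one node's rendered kubelet configuration over the configz endpoint described above (the node name is illustrative):
$ oc get --raw "/api/v1/nodes/node-1.example.com/proxy/configz" \
    | jq '.kubeletconfig.authentication.anonymous.enabled'
false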
+
+
+
+
+
+
+
+
+
+
+ Disable Anonymous Authentication to the Kubelet
+ By default, anonymous access to the Kubelet server is enabled. This
+configuration check ensures that anonymous requests to the Kubelet
+server are disabled. Edit the Kubelet server configuration file
+/etc/kubernetes/kubelet.conf on the kubelet node(s)
+and set the below parameter:
+
+authentication:
+ ...
+ anonymous:
+ enabled: false
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.1
+ When enabled, requests that are not rejected by other configured
+authentication methods are treated as anonymous requests. These
+requests are then served by the Kubelet server. OpenShift Operators should
+rely on authentication to authorize access and disallow anonymous
+requests.
+
+
+
+
+
+
+
+
+
+ Disable Anonymous Authentication to the Kubelet
+ By default, anonymous access to the Kubelet server is enabled. This
+configuration check ensures that anonymous requests to the Kubelet
+server are disabled. Edit the Kubelet server configuration file
+/etc/kubernetes/kubelet.conf on the kubelet node(s)
+and set the below parameter:
+
+authentication:
+ ...
+ anonymous:
+ enabled: false
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.1
+ When enabled, requests that are not rejected by other configured
+authentication methods are treated as anonymous requests. These
+requests are then served by the Kubelet server. OpenShift Operators should
+rely on authentication to authorize access and disallow anonymous
+requests.
+
+
+
+
+
+
+
+
+
+
+
+ Disable Anonymous Authentication to the Kubelet
+ By default, anonymous access to the Kubelet server is enabled. This
+configuration check ensures that anonymous requests to the Kubelet
+server are disabled. Edit the Kubelet server configuration file
+/etc/kubernetes/kubelet.conf on the kubelet node(s)
+and set the below parameter:
+
+authentication:
+ ...
+ anonymous:
+ enabled: false
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.1
+ When enabled, requests that are not rejected by other configured
+authentication methods are treated as anonymous requests. These
+requests are then served by the Kubelet server. OpenShift Operators should
+rely on authentication to authorize access and disallow anonymous
+requests.
+
+
+
+
-
+
Ensure authorization is set to Webhook
Unauthenticated/unauthorized users should have no access to OpenShift nodes.
The Kubelet should be set to only allow Webhook authorization.
@@ -7569,6 +7502,8 @@ authorization:
mode: Webhook
...
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -7580,15 +7515,231 @@ authorization:
4.2.2
Ensuring that the authorization is configured correctly helps enforce that
unauthenticated/unauthorized users have no access to OpenShift nodes.
-
+
CCE-83593-4
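The same configz endpoint can confirm the authorization mode on a given node; a sketch with an illustrative node name:
$ oc get --raw "/api/v1/nodes/node-1.example.com/proxy/configz" \
    | jq '.kubeletconfig.authorization.mode'
"Webhook"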
+
+
+
+
+
+
+
+
+
+
+ Ensure authorization is set to Webhook
+ Unauthenticated/unauthorized users should have no access to OpenShift nodes.
+The Kubelet should be set to only allow Webhook authorization.
+To ensure that the Kubelet requires authorization,
+validate that authorization is configured to Webhook
+in /etc/kubernetes/kubelet.conf:
+
+authorization:
+ mode: Webhook
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.2
+ Ensuring that the authorization is configured correctly helps enforce that
+unauthenticated/unauthorized users have no access to OpenShift nodes.
+
+
+
+
+
+
+
+
+
+ Ensure authorization is set to Webhook
+ Unauthenticated/unauthorized users should have no access to OpenShift nodes.
+The Kubelet should be set to only allow Webhook authorization.
+To ensure that the Kubelet requires authorization,
+validate that authorization is configured to Webhook
+in /etc/kubernetes/kubelet.conf:
+
+authorization:
+ mode: Webhook
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.2
+ Ensuring that the authorization is configured correctly helps enforce that
+unauthenticated/unauthorized users have no access to OpenShift nodes.
+
+
+
+
+
+
+
+
+
+
+
+ Ensure authorization is set to Webhook
+ Unauthenticated/unauthorized users should have no access to OpenShift nodes.
+The Kubelet should be set to only allow Webhook authorization.
+To ensure that the Kubelet requires authorization,
+validate that authorization is configured to Webhook
+in /etc/kubernetes/kubelet.conf:
+
+authorization:
+ mode: Webhook
+ ...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.2
+ Ensuring that the authorization is configured correctly helps enforce that
+unauthenticated/unauthorized users have no access to OpenShift nodes.
+
+
+
+
+
+ kubelet - Configure the Client CA Certificate
+ By default, the kubelet is not configured with a CA certificate which
+can subject the kubelet to man-in-the-middle attacks.
+
+To configure a client CA certificate, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+authentication:
+...
+ x509:
+ clientCAFile: /etc/kubernetes/kubelet-ca.crt
+...
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.3
+ Not having a CA certificate for the kubelet will subject the kubelet to possible
+man-in-the-middle attacks especially on unsafe or untrusted networks.
+Certificate validation for the kubelet allows the API server to validate
+the kubelet's identity.
+
+ CCE-83724-5
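A sketch of verifying the client CA path on one node through configz (node name illustrative):
$ oc get --raw "/api/v1/nodes/node-1.example.com/proxy/configz" \
    | jq '.kubeletconfig.authentication.x509.clientCAFile'
"/etc/kubernetes/kubelet-ca.crt"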
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Configure the Client CA Certificate
+ By default, the kubelet is not configured with a CA certificate which
+can subject the kubelet to man-in-the-middle attacks.
+
+To configure a client CA certificate, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+authentication:
+...
+ x509:
+ clientCAFile: /etc/kubernetes/kubelet-ca.crt
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.3
+ Not having a CA certificate for the kubelet will subject the kubelet to possible
+man-in-the-middle attacks especially on unsafe or untrusted networks.
+Certificate validation for the kubelet allows the API server to validate
+the kubelet's identity.
+
+
+
+
+
+
+
+
+
+ kubelet - Configure the Client CA Certificate
+ By default, the kubelet is not configured with a CA certificate which
+can subject the kubelet to man-in-the-middle attacks.
+
+To configure a client CA certificate, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+authentication:
+...
+ x509:
+ clientCAFile: /etc/kubernetes/kubelet-ca.crt
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.3
+ Not having a CA certificate for the kubelet will subject the kubelet to possible
+man-in-the-middle attacks especially on unsafe or untrusted networks.
+Certificate validation for the kubelet allows the API server to validate
+the kubelet's identity.
+
+
+
+
+
+
+
+
+
+
kubelet - Configure the Client CA Certificate
By default, the kubelet is not configured with a CA certificate which
@@ -7617,16 +7768,84 @@ authentication:
man-in-the-middle attacks especially on unsafe or untrusted networks.
Certificate validation for the kubelet allows the API server to validate
the kubelet's identity.
-
- CCE-83724-5
+
+
+
-
+
+ Kubelet - Ensure Event Creation Is Configured
+ Security relevant information should be captured. The eventRecordQPS
+Kubelet option can be used to limit the rate at which events are gathered.
+Setting this too low could result in relevant events not being logged,
+however the unlimited setting of 0 could result in a denial of service on
+the kubelet. Processing and storage systems should be scaled to handle the
+expected event load. To set the eventRecordQPS option for the kubelet,
+create a KubeletConfig option along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ eventRecordQPS:
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.9
+ It is important to capture all events and not restrict event creation.
+Events are an important source of security information and analytics that
+ensure that your environment is consistently monitored using the event
+data.
+ CCE-83576-9
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ eventRecordQPS: {{.var_event_record_qps}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ eventRecordQPS: {{.var_event_record_qps}}
+
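Once the KubeletConfig above has been rendered onto the pools, the effective value can be spot-checked per node; a sketch with an illustrative node name:
$ oc get --raw "/api/v1/nodes/node-1.example.com/proxy/configz" \
    | jq '.kubeletconfig.eventRecordQPS'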
+
+
+
+
+
+
+
+
+
+
+
+
Kubelet - Ensure Event Creation Is Configured
Security relevant information should be captured. The eventRecordQPS
Kubelet option can be used to limit the rate at which events are gathered.
@@ -7665,15 +7884,103 @@ Events are an important source of security information and analytics that
ensure that your environment is consistently monitored using the event
data.
- CCE-83576-9
- ---
+
+
+
+
+
+
+
+
+ Kubelet - Ensure Event Creation Is Configured
+ Security relevant information should be captured. The eventRecordQPS
+Kubelet option can be used to limit the rate at which events are gathered.
+Setting this too low could result in relevant events not being logged,
+however the unlimited setting of 0 could result in a denial of service on
+the kubelet. Processing and storage systems should be scaled to handle the
+expected event load. To set the eventRecordQPS option for the kubelet,
+create a KubeletConfig option along these lines:
+
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
spec:
- kubeletConfig:
- eventRecordQPS: {{.var_event_record_qps}}
-
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ eventRecordQPS:
+
+ The MachineConfig Operator does not merge KubeletConfig
+objects; the last object is used instead. If you need to
+set multiple options for the kubelet, consider putting all the custom
+options into a single KubeletConfig object.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.9
+ It is important to capture all events and not restrict event creation.
+Events are an important source of security information and analytics that
+ensure that your environment is consistently monitored using the event
+data.
+
+
+
+
+
+
+
+
+
+
+
+ Kubelet - Ensure Event Creation Is Configured
+ Security relevant information should be captured. The eventRecordQPS
+Kubelet option can be used to limit the rate at which events are gathered.
+Setting this too low could result in relevant events not being logged,
+however the unlimited setting of 0 could result in a denial of service on
+the kubelet. Processing and storage systems should be scaled to handle the
+expected event load. To set the eventRecordQPS option for the kubelet,
+create a KubeletConfig option along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ eventRecordQPS:
+
+ The MachineConfig Operator does not merge KubeletConfig
+objects; the last object is used instead. If you need to
+set multiple options for the kubelet, consider putting all the custom
+options into a single KubeletConfig object.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.9
+ It is important to capture all events and not restrict event creation.
+Events are an important source of security information and analytics that
+ensure that your environment is consistently monitored using the event
+data.
+
+
+
@@ -7708,7 +8015,7 @@ Therefore, you need to use a tool that can query the OCP API, retrieve the follo
4.2.10
Without cryptographic integrity protections, information can be
altered by unauthorized users without detection.
-
+
CCE-83396-2
@@ -7747,6 +8054,125 @@ altered by unauthorized users without detection.
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create new or modify existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+
+In order to configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection provided by them. By default Kubernetes
+supports a number of TLS ciphersuites including some that have security
+concerns, weakening the protection provided.
+
+ CCE-86030-4
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ tlsCipherSuites: [{{.var_kubelet_tls_cipher_suites}}]
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ tlsCipherSuites: [{{.var_kubelet_tls_cipher_suites}}]
+
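After the remediation above is rendered, a sketch of listing the effective suites on one node (node name illustrative); each line should match the var_kubelet_tls_cipher_suites_regex pattern:
$ oc get --raw "/api/v1/nodes/node-1.example.com/proxy/configz" \
    | jq -r '.kubeletconfig.tlsCipherSuites[]'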
+
+
+
+
+
+
+
+
+
+
+
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create new or modify existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+
+In order to configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection they provide. By default, Kubernetes
+supports a number of TLS cipher suites, including some that have security
+concerns, weakening the protection provided.
+
+
+
+
+
+
+
+
+
Ensure that the Ingress Controller only makes use of Strong Cryptographic Ciphers
Ensure that the Ingress Controller is configured to only use strong cryptographic ciphers.
@@ -7817,6 +8243,56 @@ spec:
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create a new or modify an existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+
+To configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection they provide. By default, Kubernetes
+supports a number of TLS cipher suites, including some that have security
+concerns, weakening the protection provided.
+
+
+
+
+
+
+
+
+
+
+
Ensure that the OpenShift API Server Operator only makes use of Strong Cryptographic Ciphers
Ensure that the OpenShift API Server Operator is configured to only use strong cryptographic ciphers.
@@ -7873,6 +8349,8 @@ To set the cipher suites for the kubelet, create new or modify existing
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
To configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
and var_kubelet_tls_cipher_suites have to be set.
@@ -7889,17 +8367,11 @@ and var_kubelet_tls_cipher_suites have to be set
which can reduce the protection they provide. By default, Kubernetes
supports a number of TLS cipher suites, including some that have security
concerns, weakening the protection provided.
-
- ---
-# {{.var_kubelet_tls_cipher_suites_regex}} we have to put variable array name here for mutilines remediation
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- tlsCipherSuites: [{{.var_kubelet_tls_cipher_suites}}]
-
+
+
+
@@ -7934,7 +8406,7 @@ Therefore, you need to use a tool that can query the OCP API, retrieve the follo
4.2.10
Without cryptographic integrity protections, information can be
altered by unauthorized users without detection.
-
+
CCE-90614-9
@@ -7993,9 +8465,78 @@ and validation.
However, in some cases explicitly overriding this parameter is
necessary to ensure that the appropriate node name stays as it is under
certain upgrade conditions, e.g. in AWS and OpenStack when migrating
+to external cloud providers.
+
+
+
+ kubelet - Hostname Override handling
+ Normally, OpenShift lets the kubelet get the hostname from either the
+cloud provider itself, or from the node's hostname. This ensures that
+the PKI allocated by the deployment uses the appropriate values, is valid
+and keeps working throughout the lifecycle of the cluster. IP addresses
+are not used, and hence this makes it easier for security analysts to
+associate kubelet logs with the appropriate node.
+ CIP-003-3 R6
+ CIP-004-3 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ 4.2.8
+ Allowing hostnames to be overridden creates issues around resolving nodes,
+as well as around TLS configuration, certificate validation, and log correlation
+and validation.
+However, in some cases explicitly overriding this parameter is
+necessary to ensure that the appropriate node name stays as it is under
+certain upgrade conditions, e.g. in AWS and OpenStack when migrating
to external cloud providers.
+
+ kubelet - Hostname Override handling
+ Normally, OpenShift lets the kubelet get the hostname from either the
+cloud provider itself, or from the node's hostname. This ensures that
+the PKI allocated by the deployment uses the appropriate values, is valid
+and keeps working throughout the lifecycle of the cluster. IP addresses
+are not used, and hence this makes it easier for security analysts to
+associate kubelet logs with the appropriate node.
+ CIP-003-3 R6
+ CIP-004-3 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ 4.2.8
+ Allowing hostnames to be overridden creates issues around resolving nodes,
+as well as around TLS configuration, certificate validation, and log correlation
+and validation.
+However, in some cases explicitly overriding this parameter is
+necessary to ensure that the appropriate node name stays as it is under
+certain upgrade conditions, e.g. in AWS and OpenStack when migrating
+to external cloud providers.
+
+
+
+ kubelet - Hostname Override handling
+ Normally, OpenShift lets the kubelet get the hostname from either the
+cloud provider itself, or from the node's hostname. This ensures that
+the PKI allocated by the deployment uses the appropriate values, is valid
+and keeps working throughout the lifecycle of the cluster. IP addresses
+are not used, and hence this makes it easier for security analysts to
+associate kubelet logs with the appropriate node.
+ CIP-003-3 R6
+ CIP-004-3 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ 4.2.8
+ Allowing hostnames to be overridden creates issues around resolving nodes,
+as well as around TLS configuration, certificate validation, and log correlation
+and validation.
+However, in some cases explicitly overriding this parameter is
+necessary to ensure that the appropriate node name stays as it is under
+certain upgrade conditions, e.g. in AWS and OpenStack when migrating
+to external cloud providers.
+
+
kubelet - Disable the Read-Only Port
To disable the read-only port, edit the kubelet configuration
@@ -8041,7 +8582,7 @@ system.
-
+
kubelet - Enable Certificate Rotation
To enable the kubelet to rotate client certificates, edit the kubelet configuration
file /etc/kubernetes/kubelet.conf
@@ -8051,6 +8592,8 @@ on the kubelet node(s) and set the below parameter:
rotateCertificates: true
...
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -8062,9 +8605,103 @@ rotateCertificates: true
4.2.11
Allowing the kubelet to auto-update the certificates ensures that there is no downtime
in certificate renewal, as well as ensuring confidentiality and integrity.
-
+
CCE-83838-3
+
+
+
+
+
+
+
+
+
+
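Given a configz dump saved as in the earlier sketch, checking this rule comes down to a single boolean lookup. A minimal sketch, assuming the same "worker" path:

import json

# Load the dump persisted by the earlier configz sketch (assumed path).
with open("kubeletconfig/worker/worker") as f:
    kubelet = json.load(f).get("kubeletconfig", {})

# The rendered configuration is expected to contain rotateCertificates: true.
if kubelet.get("rotateCertificates") is True:
    print("PASS: kubelet client certificate rotation is enabled")
else:
    print("FAIL: rotateCertificates is", kubelet.get("rotateCertificates"))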
+ kubelet - Enable Certificate Rotation
+ To enable the kubelet to rotate client certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+...
+rotateCertificates: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.11
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Certificate Rotation
+ To enable the kubelet to rotate client certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+...
+rotateCertificates: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.11
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Certificate Rotation
+ To enable the kubelet to rotate client certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+...
+rotateCertificates: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.11
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
@@ -8082,6 +8719,8 @@ featureGates:
RotateKubeletClientCertificate: true
...
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -8093,15 +8732,81 @@ featureGates:
4.2.11
Allowing the kubelet to auto-update the certificates ensures that there is no downtime
in certificate renewal, as well as ensuring confidentiality and integrity.
-
+
CCE-83352-5
+
+
+
+ kubelet - Enable Client Certificate Rotation
+ To enable the kubelet to rotate client certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+featureGates:
+...
+ RotateKubeletClientCertificate: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.11
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Client Certificate Rotation
+ To enable the kubelet to rotate client certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+featureGates:
+...
+ RotateKubeletClientCertificate: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.11
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+
kubelet - Allow Automatic Firewall Configuration
The kubelet has the ability to automatically configure the firewall to allow
@@ -8111,6 +8816,8 @@ To allow the kubelet to modify the firewall, edit the kubelet configuration
file /etc/kubernetes/kubelet.conf
on the kubelet node(s) and set the below parameter:
makeIPTablesUtilChains: true
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -8124,21 +8831,152 @@ on the kubelet node(s) and set the below parameter:
networking traffic through. This ensures that when a pod or container is running,
the correct ports are configured, and that the ports are removed when a pod or
container is no longer in existence.
-
+
CCE-83775-7
---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
kubeletConfig:
makeIPTablesUtilChains: true
-
-
-
-
-
-
-
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ makeIPTablesUtilChains: true
+
+
+
+
+
+
+
+
+
+
+
+
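Since the note above stresses that both var_role_worker and var_role_master can be in play, a check for this setting would typically look at one dump per role. A minimal sketch, assuming the per-role layout from the earlier example:

import json

# Inspect makeIPTablesUtilChains in each per-role dump ("worker"/"master" assumed).
for role in ("worker", "master"):
    try:
        with open(f"kubeletconfig/{role}/{role}") as f:
            kubelet = json.load(f).get("kubeletconfig", {})
    except FileNotFoundError:
        print(f"{role}: no dump collected")
        continue
    ok = kubelet.get("makeIPTablesUtilChains") is True
    print(f"{role}: makeIPTablesUtilChains ->", "PASS" if ok else "FAIL")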
+ kubelet - Allow Automatic Firewall Configuration
+ The kubelet has the ability to automatically configure the firewall to allow
+the containers' required ports and connections to networking resources and destinations;
+misconfiguring these parameters can potentially create a security incident.
+To allow the kubelet to modify the firewall, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+makeIPTablesUtilChains: true
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.7
+ The kubelet should automatically configure the firewall settings to allow access and
+networking traffic through. This ensures that when a pod or container is running,
+the correct ports are configured, and that the ports are removed when a pod or
+container is no longer in existence.
+
+
+
+
+
+
+
+
+
+ kubelet - Allow Automatic Firewall Configuration
+ The kubelet has the ability to automatically configure the firewall to allow
+the containers' required ports and connections to networking resources and destinations;
+misconfiguring these parameters can potentially create a security incident.
+To allow the kubelet to modify the firewall, set the
+makeIPTablesUtilChains option for the kubelet by
+creating a KubeletConfig object along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ makeIPTablesUtilChains: true
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.7
+ The kubelet should automatically configure the firewall settings to allow access and
+networking traffic through. This ensures that when a pod or container is running,
+the correct ports are configured, and that the ports are removed when a pod or
+container is no longer in existence.
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Allow Automatic Firewall Configuration
+ The kubelet has the ability to automatically configure the firewall to allow
+the containers' required ports and connections to networking resources and destinations;
+misconfiguring these parameters can potentially create a security incident.
+To allow the kubelet to modify the firewall, set the
+makeIPTablesUtilChains option for the kubelet by
+creating a KubeletConfig object along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ makeIPTablesUtilChains: true
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.7
+ The kubelet should automatically configure the firewall settings to allow access and
+networking traffic through. This ensures that when a pod or container is running,
+the correct ports are configured, and that the ports are removed when a pod or
+container is no longer in existence.
+
+
+
+
+
+
+
+
+
kubelet - Enable Protect Kernel Defaults
@@ -8854,7 +9692,7 @@ kernel behavior.
-
+
kubelet - Enable Server Certificate Rotation
To enable the kubelet to rotate server certificates, edit the kubelet configuration
file /etc/kubernetes/kubelet.conf
@@ -8865,6 +9703,8 @@ featureGates:
RotateKubeletServerCertificate: true
...
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -8876,16 +9716,178 @@ featureGates:
4.2.12
Allowing the kubelet to auto-update the certificates ensures that there is no downtime
in certificate renewal, as well as ensuring confidentiality and integrity.
-
+
CCE-83356-6
+
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Server Certificate Rotation
+ To enable the kubelet to rotate server certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+featureGates:
+...
+ RotateKubeletServerCertificate: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.12
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Server Certificate Rotation
+ To enable the kubelet to rotate server certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+featureGates:
+...
+ RotateKubeletServerCertificate: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.12
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Enable Server Certificate Rotation
+ To enable the kubelet to rotate server certificates, edit the kubelet configuration
+file /etc/kubernetes/kubelet.conf
+on the kubelet node(s) and set the below parameter:
+
+featureGates:
+...
+ RotateKubeletServerCertificate: true
+...
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.12
+ Allowing the kubelet to auto-update the certificates ensures that there is no downtime
+in certificate renewal, as well as ensuring confidentiality and integrity.
+
+
+
+
-
+
+ kubelet - Do Not Disable Streaming Timeouts
+ Timeouts for streaming connections should not be disabled as they help to prevent
+denial-of-service attacks.
+To configure streaming connection timeouts, set the
+streamingConnectionIdleTimeout option for the kubelet by
+creating a KubeletConfig object along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ streamingConnectionIdleTimeout:
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.5
+ Ensuring connections have timeouts helps to protect against denial-of-service attacks as
+well as to disconnect inactive connections. In addition, setting connection timeouts helps
+to prevent running out of ephemeral ports.
+
+ CCE-84097-5
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ streamingConnectionIdleTimeout: {{.var_streaming_connection_timeouts}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ streamingConnectionIdleTimeout: {{.var_streaming_connection_timeouts}}
+
+
+
+
+
+
+
+
+
+
+
+
+
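"Do not disable" means the check should accept any non-zero timeout rather than one fixed value. A minimal sketch, again assuming the saved "worker" dump; treating the duration as a Go-style string such as "4h0m0s" is an assumption about how configz renders it:

import json

with open("kubeletconfig/worker/worker") as f:  # assumed path from the earlier sketch
    kubelet = json.load(f).get("kubeletconfig", {})

# A value of "0s" (or an absent setting) would mean the timeout is disabled.
timeout = kubelet.get("streamingConnectionIdleTimeout")
if timeout in (None, "0", "0s"):
    print("FAIL: streaming connection timeouts appear disabled:", timeout)
else:
    print("PASS: streamingConnectionIdleTimeout =", timeout)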
kubelet - Do Not Disable Streaming Timeouts
Timeouts for streaming connections should not be disabled as they help to prevent
denial-of-service attacks.
@@ -8906,23 +9908,99 @@ on the kubelet node(s) and set the below parameter:
well as to disconnect inactive connections. In addition, setting connection timeouts helps
to prevent running out of ephemeral ports.
- CCE-84097-5
- ---
+
+
+
+
+
+
+
+
+
+ kubelet - Do Not Disable Streaming Timeouts
+ Timeouts for streaming connections should not be disabled as they help to prevent
+denial-of-service attacks.
+To configure streaming connection timeouts, set the
+streamingConnectionIdleTimeout option for the kubelet by
+creating a KubeletConfig object along these lines:
+
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
spec:
- kubeletConfig:
- streamingConnectionIdleTimeout: {{.var_streaming_connection_timeouts}}
-
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ streamingConnectionIdleTimeout:
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.5
+ Ensuring connections have timeouts helps to protect against denial-of-service attacks as
+well as to disconnect inactive connections. In addition, setting connection timeouts helps
+to prevent running out of ephemeral ports.
+
+
+
+
+
+
+
+
+
+
+
+
+ kubelet - Do Not Disable Streaming Timeouts
+ Timeouts for streaming connections should not be disabled as they help to prevent
+denial-of-service attacks.
+To configure streaming connection timeouts, set the
+streamingConnectionIdleTimeout option for the kubelet by
+creating a KubeletConfig object along these lines:
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ name: kubelet-config-$pool
+spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ streamingConnectionIdleTimeout:
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 4.2.5
+ Ensuring connections have timeouts helps to protect against denial-of-service attacks as
+well as to disconnect inactive connections. In addition, setting connection timeouts helps
+to prevent running out of ephemeral ports.
+
+
+
-
+
Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -8954,6 +10032,8 @@ This rule pertains to the imagefs.available setting of th
section.
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -8968,11 +10048,13 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
CCE-84144-5
- ---
+ ---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
kubeletConfig:
evictionHard:
@@ -8980,19 +10062,45 @@ spec:
---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ imagefs.available: {{.var_kubelet_evictionhard_imagefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
spec:
kubeletConfig:
evictionPressureTransitionPeriod: 0s
-
+
+
+
+
+
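The evictionHard rules in this group each pin one threshold, and the remediation above pairs them with evictionPressureTransitionPeriod: 0s. A minimal sketch of reading them all back from the saved dump (same assumed "worker" path; the expected values come from the deployment-specific var_kubelet_evictionhard_* variables, so this only reports what is set):

import json

# Thresholds covered by the evictionHard rules in this group.
EXPECTED_KEYS = [
    "imagefs.available",
    "imagefs.inodesFree",
    "memory.available",
    "nodefs.available",
    "nodefs.inodesFree",
]

with open("kubeletconfig/worker/worker") as f:  # assumed path from the earlier sketch
    kubelet = json.load(f).get("kubeletconfig", {})

hard = kubelet.get("evictionHard", {})
for key in EXPECTED_KEYS:
    print(f"{key}: {hard.get(key, 'MISSING')}")
print("evictionPressureTransitionPeriod:", kubelet.get("evictionPressureTransitionPeriod"))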
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9019,7 +10127,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the imagefs.inodesFree setting of the evictionHard
+This rule pertains to the imagefs.available setting of the evictionHard
section.
@@ -9038,30 +10146,15 @@ system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
- CCE-84147-8
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionHard:
- imagefs.inodesFree: {{.var_kubelet_evictionhard_imagefs_inodesfree}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionHard: memory.available
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9088,7 +10181,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the memory.available setting of the evictionHard
+This rule pertains to the imagefs.available setting of the evictionHard
section.
@@ -9106,31 +10199,18 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84135-3
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionHard:
- memory.available: {{.var_kubelet_evictionhard_memory_available}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.available
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9154,10 +10234,10 @@ Machine Config Pool using any combination of the following:
To configure, follow the directions in
-the documentation
+the documentation
-This rule pertains to the nodefs.available setting of the evictionHard
+This rule pertains to the imagefs.available setting of the evictionHard
section.
@@ -9175,31 +10255,18 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84138-7
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionHard:
- nodefs.available: {{.var_kubelet_evictionhard_nodefs_available}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.inodesFree
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9226,10 +10293,12 @@ To configure, follow the directions in
the documentation
-This rule pertains to the nodefs.inodesFree setting of the evictionHard
+This rule pertains to the imagefs.inodesFree setting of the evictionHard
section.
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -9244,31 +10313,59 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84141-1
- ---
+ CCE-84147-8
+ ---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
kubeletConfig:
evictionHard:
- nodefs.inodesFree: {{.var_kubelet_evictionhard_nodefs_inodesfree}}
+ imagefs.inodesFree: {{.var_kubelet_evictionhard_imagefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ imagefs.inodesFree: {{.var_kubelet_evictionhard_imagefs_inodesfree}}
---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
spec:
kubeletConfig:
evictionPressureTransitionPeriod: 0s
-
+
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.available
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9295,7 +10392,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the imagefs.available setting of the evictionSoft
+This rule pertains to the imagefs.inodesFree setting of the evictionHard
section.
@@ -9314,37 +10411,15 @@ system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
- CCE-84127-0
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoft:
- imagefs.available: {{.var_kubelet_evictionsoft_imagefs_available}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoftGracePeriod:
- imagefs.available: "1m30s"
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.inodesFree
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9371,7 +10446,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the imagefs.inodesFree setting of the evictionSoft
+This rule pertains to the imagefs.inodesFree setting of the evictionHard
section.
@@ -9389,38 +10464,18 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84132-0
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoft:
- imagefs.inodesFree: {{.var_kubelet_evictionsoft_imagefs_inodesfree}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoftGracePeriod:
- imagefs.inodesFree: "1m30s"
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionSoft: memory.available
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9447,7 +10502,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the memory.available setting of the evictionSoft
+This rule pertains to the imagefs.inodesFree setting of the evictionHard
section.
@@ -9465,38 +10520,18 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84222-9
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoft:
- memory.available: {{.var_kubelet_evictionsoft_memory_available}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoftGracePeriod:
- memory.available: "1m30s"
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.available
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: memory.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9520,13 +10555,15 @@ Machine Config Pool using any combination of the following:
To configure, follow the directions in
-the documentation
+the documentation
-This rule pertains to the nodefs.available setting of the evictionSoft
+This rule pertains to the memory.available setting of the evictionHard
section.
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -9541,38 +10578,59 @@ and avoiding degraded performance and availability. In the worst case, the
system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
-
- CCE-84119-7
- ---
+ CCE-84135-3
+ ---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
kubeletConfig:
- evictionSoft:
- nodefs.available: {{.var_kubelet_evictionsoft_nodefs_available}}
+ evictionHard:
+ memory.available: {{.var_kubelet_evictionhard_memory_available}}
---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
kubeletConfig:
- evictionSoftGracePeriod:
- nodefs.available: "1m30s"
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ memory.available: {{.var_kubelet_evictionhard_memory_available}}
---
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
spec:
kubeletConfig:
evictionPressureTransitionPeriod: 0s
-
+
+
+
+
+
-
+
-
- Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.inodesFree
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: memory.available
Two types of garbage collection are performed on an OpenShift Container Platform node:
@@ -9599,7 +10657,7 @@ To configure, follow the directions in
the documentation
-This rule pertains to the nodefs.inodesFree setting of the evictionSoft
+This rule pertains to the memory.available setting of the evictionHard
section.
@@ -9618,1782 +10676,1259 @@ system might crash or just be unusable for a long period of time.
Based on your system resources and tests, choose an appropriate threshold
value to activate garbage collection.
- CCE-84123-9
- ---
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoft:
- nodefs.inodesFree: {{.var_kubelet_evictionsoft_nodefs_inodesfree}}
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionSoftGracePeriod:
- nodefs.inodesFree: "1m30s"
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: KubeletConfig
-spec:
- kubeletConfig:
- evictionPressureTransitionPeriod: 0s
-
-
+
-
+
-
- kubelet - Ensure that the --read-only-port is secured
- Disable the read-only port.
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: imagefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.inodesFree setting of the evictionHard
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- The Kubelet process provides a read-only API in addition to the main Kubelet API.
-Unauthenticated access is provided to this read-only API which could possibly retrieve
-potentially sensitive information about the cluster.
-
-
-
-
-
-
-
-
-
- KubeletTest
- KubeletTest
- SC-8
- SC-8(1)
- SC-8(2)
- 4.2.10
- Test KubeletTest
-
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or just be unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
-
+
-
+
-
- KubeletTest
- KubeletTest
- SC-8
- SC-8(1)
- SC-8(2)
- 4.2.10
- Test KubeletTest
-
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: memory.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the memory.available setting of the evictionHard
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or just be unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
-
+
+
-
+
-
+
-
-
- OpenShift - Logging Settings
- Contains evaluations for the cluster's logging configuration settings.
-
- Configure the OpenShift Audit Profile
- Audit log profiles define how to log requests that come to the OpenShift
-API server, the Kubernetes API server, and the OAuth API server.
- Default
- Default
- WriteRequestBodies
- AllRequestBodies
-
-
- Ensure that Audit Log Errors Emit Alerts
-
-OpenShift audit works at the API server level, logging all requests coming to the server.
-However, if API server instance is unable to write errors, an alert must be issued
-in order for the organization to take a relevant action. e.g. shutting down that instance.
-
-Kubernetes by default has metrics that enable one to write such alerts:
-apiserver_audit_event_totalapiserver_audit_error_total
-
-Such an example is shipped in OCP 4.9+
-
-
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionHard
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or just be unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84138-7
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
metadata:
- name: audit-errors
- namespace: openshift-kube-apiserver
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
- groups:
- - name: apiserver-audit
- rules:
- - alert: AuditLogError
- annotations:
- summary: |-
- An API Server instance was unable to write audit logs. This could be
- triggered by the node running out of space, or a malicious actor
- tampering with the audit logs.
- description: An API Server had an error writing to an audit log.
- expr: |
- sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
- for: 1m
- labels:
- severity: warning
-
-
-
-For more information, consult the
-official Kubernetes documentation.
-
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/monitoring.coreos.com/v1/prometheusrules?limit=500
- API endpoint, filter with with the jq utility using the following filter
- [.items[].spec.groups[].rules[].expr]
- and persist it to the local
- /apis/monitoring.coreos.com/v1/prometheusrules?limit=500#72e9ad360bb6bdf4ad9e43217cd0ec9cb90e7c3b08d4fbe0edf087ad899e05a6
- file.
-
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- AU-5
- SRG-APP-000109-CTR-000215
- When there are errors writing audit logs, security events will not be logged
-by that specific API Server instance. Security Incident Response teams use
-these audit logs, amongst other artifacts, to determine the impact of
-security breaches or events. Without these logs, it becomes very difficult
-to assess a situation and do appropriate root cause analysis in such incidents.
- CCE-90744-4
- ---
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
+ kubeletConfig:
+ evictionHard:
+ nodefs.available: {{.var_kubelet_evictionhard_nodefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
metadata:
- name: audit-errors
- namespace: openshift-kube-apiserver
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
spec:
- groups:
- - name: apiserver-audit
- rules:
- - alert: AuditLogError
- annotations:
- summary: |-
- An API Server instance was unable to write audit logs. This could be
- triggered by the node running out of space, or a malicious actor
- tampering with the audit logs.
- description: An API Server had an error writing to an audit log.
- expr: |
- sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
- for: 1m
- labels:
- severity: warning
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ nodefs.available: {{.var_kubelet_evictionhard_nodefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
+
+
-
+
-
+
-
- Ensure that Audit Log Forwarding Uses TLS
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.available
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
-OpenShift audit works at the API server level, logging all requests coming to the server.
-Audit is on by default and the best practice is to ship audit logs off the cluster for retention
-using a secure protocol.
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
-The cluster-logging-operator is able to do this with the ClusterLogForwarders resource.
-The forementioned resource can be configured to logs to different third party systems.
-For more information on this, please reference the official documentation:
-
- https://docs.openshift.com/container-platform/latest/logging/cluster-logging-external.html
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionHard
+section.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the .
-This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance
- API endpoint, filter with with the jq utility using the following filter
- try [.spec.outputs[].url] catch []
- and persist it to the local
- /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance#71786452ba18c51ba8ad51472a078619e2e8b52a86cd75087af5aab42400f6c0
- file.
-
- CIP-003-8 R5.2
- CIP-004-6 R3.3
- CIP-007-3 R6.5
- AU-9
- AU-9(2)
- AU-9(3)
- AU-10
- SRG-APP-000118-CTR-000240
- SRG-APP-000119-CTR-000245
- SRG-APP-000120-CTR-000250
- SRG-APP-000121-CTR-000255
- SRG-APP-000122-CTR-000260
- SRG-APP-000123-CTR-000265
- SRG-APP-000126-CTR-000275
- SRG-APP-000290-CTR-000670
- It is necessary to ensure that any configured output uses the TLS protocol to receive
-logs in order to ensure the confidentiality, integrity and authenticity of the logs.
- CCE-90688-3
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or just be unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
-
+
-
+
-
- Ensure that the cluster's audit profile is properly set
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.available
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
-OpenShift can audit the details of requests made to the API server through
-the standard Kubernetes audit capabilities.
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
-In OpenShift, auditing of the API Server is on by default. Audit provides a
-security-relevant chronological set of records documenting the sequence of
-activities that have affected system by individual users, administrators, or
-other components of the system. Audit works at the API server level, logging
-all requests coming to the server. Each audit log contains two entries:
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
-The request line containing:
+This rule pertains to the nodefs.available setting of the evictionHard
+section.
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or just be unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
+
+
+
+
+
+
+
+
+
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
- A Unique ID allowing to match the response line (see #2)
- The source IP of the request
- The HTTP method being invoked
- The original user invoking the operation
- The impersonated user for the operation (self meaning himself)
- The impersonated group for the operation (lookup meaning user's group)
- The namespace of the request or none
- The URI as requested
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
-The response line containing:
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
- The aforementioned unique ID
- The response code
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
-For more information on how to configure the audit profile, please visit
-the documentation
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionHard
+section.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /apis/config.openshift.io/v1/apiservers/cluster file.
- CIP-003-8 R4
- CIP-003-8 R4.1
- CIP-003-8 R4.2
- CIP-003-8 R5.2
CIP-003-8 R6
- CIP-004-6 R2.2.2
- CIP-004-6 R2.2.3
- CIP-004-6 R3.3
- CIP-007-3 R.1.3
- CIP-007-3 R5
- CIP-007-3 R5.1.1
- CIP-007-3 R5.2
- CIP-007-3 R5.3.1
- CIP-007-3 R5.3.2
- CIP-007-3 R5.3.3
- CIP-007-3 R6.5
- AU-2
- AU-3
- AU-3(1)
- AU-6
- AU-6(1)
- AU-7
- AU-7(1)
- AU-8
- AU-8(1)
- AU-9
- AU-12
- AU-12(1)
- CM-5(1)
- SI-11
- SI-12
- SI-4(20)
- SI-4(23)
- Req-2.2
- Req-12.5.5
- SRG-APP-000089-CTR-000150
- SRG-APP-000090-CTR-000155
- SRG-APP-000091-CTR-000160
- SRG-APP-000095-CTR-000170
- SRG-APP-000096-CTR-000175
- SRG-APP-000097-CTR-000180
- SRG-APP-000098-CTR-000185
- SRG-APP-000099-CTR-000190
- SRG-APP-000100-CTR-000195
- SRG-APP-000100-CTR-000200
- SRG-APP-000101-CTR-000205
- SRG-APP-000116-CTR-000235
- SRG-APP-000118-CTR-000240
- SRG-APP-000119-CTR-000245
- SRG-APP-000120-CTR-000250
- SRG-APP-000121-CTR-000255
- SRG-APP-000122-CTR-000260
- SRG-APP-000123-CTR-000265
- SRG-APP-000181-CTR-000485
- SRG-APP-000266-CTR-000625
- SRG-APP-000374-CTR-000865
- SRG-APP-000375-CTR-000870
- SRG-APP-000380-CTR-000900
- SRG-APP-000381-CTR-000905
- SRG-APP-000492-CTR-001220
- SRG-APP-000493-CTR-001225
- SRG-APP-000494-CTR-001230
- SRG-APP-000495-CTR-001235
- SRG-APP-000496-CTR-001240
- SRG-APP-000497-CTR-001245
- SRG-APP-000498-CTR-001250
- SRG-APP-000499-CTR-001255
- SRG-APP-000500-CTR-001260
- SRG-APP-000501-CTR-001265
- SRG-APP-000502-CTR-001270
- SRG-APP-000503-CTR-001275
- SRG-APP-000504-CTR-001280
- SRG-APP-000505-CTR-001285
- SRG-APP-000506-CTR-001290
- SRG-APP-000507-CTR-001295
- SRG-APP-000508-CTR-001300
- SRG-APP-000509-CTR-001305
- SRG-APP-000510-CTR-001310
- 3.2.1
- 3.2.2
- Logging is an important detective control for all systems, to detect potential
-unauthorised access.
- CCE-83577-7
- ---
-apiVersion: config.openshift.io/v1
-kind: APIServer
-metadata:
- name: cluster
-spec:
- audit:
- profile: {{.var_openshift_audit_profile}}
-
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
-
+
-
+
-
- Ensure that OpenShift Logging Operator is scanning the cluster
- OpenShift Logging Operator provides ability to aggregate all the logs from the
-OpenShift Container Platform cluster, such as node system audit logs, application
-container logs, and infrastructure logs. OpenShift Logging aggregates these logs
-from throughout OpenShift cluster and stores them in a default log store. [1]
-
-[1]https://docs.openshift.com/container-platform/4.10/logging/cluster-logging.html
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance API endpoint to the local /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance file.
- AU-3(2)
- SRG-APP-000092-CTR-000165
- SRG-APP-000111-CTR-000220
- OpenShift Logging Operator is able to collect, aggregate, and manage logs.
- CCE-85918-1
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionHard
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API and retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, saving it to the local "/kubeletconfig/role/role" file.
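+As a sketch of fetching that endpoint manually (assuming jq is installed
+and NODE holds a node name taken from "oc get nodes"):
+$ NODE=$(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}')
+$ oc get --raw "/api/v1/nodes/$NODE/proxy/configz" | jq '.kubeletconfig.evictionHard'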
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84141-1
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ nodefs.inodesFree: {{.var_kubelet_evictionhard_nodefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionHard:
+ nodefs.inodesFree: {{.var_kubelet_evictionhard_nodefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
+
+
+
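+Each KubeletConfig object above targets a single Machine Config Pool via
+the complianceascode.io/node-role annotation, which is why separate worker
+and master manifests are rendered. A usage sketch, assuming the rendered
+manifest was saved to a hypothetical kubeletconfig-eviction.yaml:
+$ oc apply -f kubeletconfig-eviction.yaml
+$ oc get kubeletconfig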
-
+
-
+
-
- Record Access Events to Kubernetes Audit Log Directory
- The audit system should collect access events to read the Kubernetes audit log directory.
-The following audit rule will assure that access to audit log directory are
-collected.
--a always,exit -F dir=/var/log/kube-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
-If the auditd daemon is configured to use the augenrules
-program to read audit rules during daemon startup (the default), add the
-rule to a file with suffix .rules in the directory
-/etc/audit/rules.d.
-If the auditd daemon is configured to use the auditctl
-utility to read audit rules during daemon startup, add the rule to
-/etc/audit/audit.rules file.
- AU-2(d)
- AU-12(c)
- AC-6(9)
- CM-6(a)
- SRG-APP-000343-CTR-000780
- Attempts to read the logs should be recorded, suspicious access to audit log files could be an indicator of malicious activity on a system.
-Auditing these events could serve as evidence of potential system compromise.'
-
- CCE-83640-3
- ---
-#
-
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-spec:
- config:
- ignition:
- version: 3.1.0
- storage:
- files:
- - contents:
- source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/kube-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
- mode: 0600
- path: /etc/audit/rules.d/30-access-var-log-kube-audit.rules
- overwrite: true
-
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionHard
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
+
-
+
-
- Record Access Events to OAuth Audit Log Directory
- The audit system should collect access events to read the OAuth audit log directory.
-The following audit rule will assure that access to audit log directory are
-collected.
--a always,exit -F dir=/var/log/oauth-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
-If the auditd daemon is configured to use the augenrules
-program to read audit rules during daemon startup (the default), add the
-rule to a file with suffix .rules in the directory
-/etc/audit/rules.d.
-If the auditd daemon is configured to use the auditctl
-utility to read audit rules during daemon startup, add the rule to
-/etc/audit/audit.rules file.
- AU-2(d)
- AU-12(c)
- AC-6(9)
- CM-6(a)
- SRG-APP-000343-CTR-000780
- Attempts to read the logs should be recorded, suspicious access to audit log files could be an indicator of malicious activity on a system.
-Auditing these events could serve as evidence of potential system compromise.'
-
- CCE-90631-3
- ---
-#
-
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-spec:
- config:
- ignition:
- version: 3.1.0
- storage:
- files:
- - contents:
- source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/oauth-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
- mode: 0600
- path: /etc/audit/rules.d/30-access-var-log-oauth-audit.rules
- overwrite: true
-
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionHard
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Record Access Events to OpenShift Audit Log Directory
- The audit system should collect access events to read the OpenShift audit log directory.
-The following audit rule will assure that access to audit log directory are
-collected.
--a always,exit -F dir=/var/log/openshift-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
-If the auditd daemon is configured to use the augenrules
-program to read audit rules during daemon startup (the default), add the
-rule to a file with suffix .rules in the directory
-/etc/audit/rules.d.
-If the auditd daemon is configured to use the auditctl
-utility to read audit rules during daemon startup, add the rule to
-/etc/audit/audit.rules file.
- AU-2(d)
- AU-12(c)
- AC-6(9)
- CM-6(a)
- SRG-APP-000343-CTR-000780
- Attempts to read the logs should be recorded, suspicious access to audit log files could be an indicator of malicious activity on a system.
-Auditing these events could serve as evidence of potential system compromise.'
-
- CCE-90632-1
- ---
-#
-
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-spec:
- config:
- ignition:
- version: 3.1.0
- storage:
- files:
- - contents:
- source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/openshift-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
- mode: 0600
- path: /etc/audit/rules.d/30-access-var-log-ocp-audit.rules
- overwrite: true
-
+
+ Ensure Eviction threshold Settings Are Set - evictionHard: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionHard
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- The Kubernetes Audit Logs Directory Must Have Mode 0700
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.available
-To properly set the permissions of /var/log/kube-apiserver/, run the command:
-$ sudo chmod 0700 /var/log/kube-apiserver/
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.2
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-004-6 R3.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CIP-007-3 R6.5
- CM-6(a)
- AC-6(1)
- AU-9
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-APP-000118-CTR-000240
- SRG-APP-000119-CTR-000245
- SRG-APP-000120-CTR-000250
- SRG-APP-000121-CTR-000255
- SRG-APP-000122-CTR-000260
- SRG-APP-000123-CTR-000265
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-83645-2
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.available setting of the evictionSoft
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API and retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, saving it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84127-0
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ imagefs.available: {{.var_kubelet_evictionsoft_imagefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ imagefs.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ imagefs.available: {{.var_kubelet_evictionsoft_imagefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ imagefs.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
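+Note that each evictionSoft signal must be paired with a matching
+evictionSoftGracePeriod entry, as in the manifest above; the kubelet
+rejects a soft eviction threshold that has no grace period. A minimal
+check of the applied values (a sketch, again assuming jq and a node name
+in NODE):
+$ oc get --raw "/api/v1/nodes/$NODE/proxy/configz" | jq '.kubeletconfig | {evictionSoft, evictionSoftGracePeriod}'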
-
+
+
+
+
+
-
+
-
- The OAuth Audit Logs Directory Must Have Mode 0700
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.available
-To properly set the permissions of /var/log/oauth-apiserver/, run the command:
-$ sudo chmod 0700 /var/log/oauth-apiserver/
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.2
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-004-6 R3.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CIP-007-3 R6.5
- CM-6(a)
- AC-6(1)
- AU-9
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-APP-000118-CTR-000240
- SRG-APP-000119-CTR-000245
- SRG-APP-000120-CTR-000250
- SRG-APP-000121-CTR-000255
- SRG-APP-000122-CTR-000260
- SRG-APP-000123-CTR-000265
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-90633-9
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.available setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
+
-
+
-
- The OpenShift Audit Logs Directory Must Have Mode 0700
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.available
-To properly set the permissions of /var/log/openshift-apiserver/, run the command:
-$ sudo chmod 0700 /var/log/openshift-apiserver/
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.2
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-004-6 R3.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CIP-007-3 R6.5
- CM-6(a)
- AC-6(1)
- AU-9
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-APP-000118-CTR-000240
- SRG-APP-000119-CTR-000245
- SRG-APP-000120-CTR-000250
- SRG-APP-000121-CTR-000255
- SRG-APP-000122-CTR-000260
- SRG-APP-000123-CTR-000265
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-90634-7
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.available setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Kubernetes Audit Logs Must Be Owned By Root
- All audit logs must be owned by root user and group. By default, the path for the Kubernetes audit log is /var/log/kube-apiserver/.
-
-To properly set the owner of /var/log/kube-apiserver, run the command:
-$ sudo chown root /var/log/kube-apiserver
-
-To properly set the owner of /var/log/kube-apiserver/*, run the command:
-$ sudo chown root /var/log/kube-apiserver/*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- CCI-000162
- CCI-000163
- CCI-000164
- CCI-001314
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-OS-000057-GPOS-00027
- SRG-OS-000058-GPOS-00028
- SRG-OS-000059-GPOS-00029
- SRG-OS-000206-GPOS-00084
- Unauthorized disclosure of audit records can reveal system and configuration data to
-attackers, thus compromising its confidentiality.
-
- CCE-83650-2
-
-
-
-
-
-
-
-
- OAuth Audit Logs Must Be Owned By Root
- All audit logs must be owned by root user and group. By default, the path for the OAuth audit log is /var/log/oauth-apiserver/.
-
-To properly set the owner of /var/log/oauth-apiserver, run the command:
-$ sudo chown root /var/log/oauth-apiserver
-
-To properly set the owner of /var/log/oauth-apiserver/*, run the command:
-$ sudo chown root /var/log/oauth-apiserver/*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- CCI-000162
- CCI-000163
- CCI-000164
- CCI-001314
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-OS-000057-GPOS-00027
- SRG-OS-000058-GPOS-00028
- SRG-OS-000059-GPOS-00029
- SRG-OS-000206-GPOS-00084
- Unauthorized disclosure of audit records can reveal system and configuration data to
-attackers, thus compromising its confidentiality.
-
- CCE-90635-4
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.available setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- OpenShift Audit Logs Must Be Owned By Root
- All audit logs must be owned by root user and group. By default, the path for the OpenShift audit log is /var/log/openshift-apiserver/.
-
-To properly set the owner of /var/log/openshift-apiserver, run the command:
-$ sudo chown root /var/log/openshift-apiserver
-
-To properly set the owner of /var/log/openshift-apiserver/*, run the command:
-$ sudo chown root /var/log/openshift-apiserver/*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- CCI-000162
- CCI-000163
- CCI-000164
- CCI-001314
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- SRG-OS-000057-GPOS-00027
- SRG-OS-000058-GPOS-00028
- SRG-OS-000059-GPOS-00029
- SRG-OS-000206-GPOS-00084
- Unauthorized disclosure of audit records can reveal system and configuration data to
-attackers, thus compromising its confidentiality.
-
- CCE-90636-2
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.inodesFree setting of the evictionSoft
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API and retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, saving it to the local "/kubeletconfig/role/role" file.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84132-0
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ imagefs.inodesFree: {{.var_kubelet_evictionsoft_imagefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ imagefs.inodesFree: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ imagefs.inodesFree: {{.var_kubelet_evictionsoft_imagefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ imagefs.inodesFree: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
-
+
+
+
+
+
-
+
-
- Kubernetes Audit Logs Must Have Mode 0600
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.inodesFree
-To properly set the permissions of /var/log/kube-apiserver/.*, run the command:
-$ sudo chmod 0600 /var/log/kube-apiserver/.*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-83654-4
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.inodesFree setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
+
-
+
-
- OAuth Audit Logs Must Have Mode 0600
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.inodesFree
-To properly set the permissions of /var/log/oauth-apiserver/.*, run the command:
-$ sudo chmod 0600 /var/log/oauth-apiserver/.*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-90637-0
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.inodesFree setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- OpenShift Audit Logs Must Have Mode 0600
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: imagefs.inodesFree
-To properly set the permissions of /var/log/openshift-apiserver/.*, run the command:
-$ sudo chmod 0600 /var/log/openshift-apiserver/.*
- 1
- 11
- 12
- 13
- 14
- 15
- 16
- 18
- 19
- 3
- 4
- 5
- 6
- 7
- 8
- 5.4.1.1
- APO01.06
- APO11.04
- APO12.06
- BAI03.05
- BAI08.02
- DSS02.02
- DSS02.04
- DSS02.07
- DSS03.01
- DSS05.04
- DSS05.07
- DSS06.02
- MEA02.01
- 3.3.1
- 4.2.3.10
- 4.3.3.3.9
- 4.3.3.5.8
- 4.3.3.7.3
- 4.3.4.4.7
- 4.3.4.5.6
- 4.3.4.5.7
- 4.3.4.5.8
- 4.4.2.1
- 4.4.2.2
- 4.4.2.4
- SR 2.1
- SR 2.10
- SR 2.11
- SR 2.12
- SR 2.8
- SR 2.9
- SR 5.2
- SR 6.1
- A.10.1.1
- A.11.1.4
- A.11.1.5
- A.11.2.1
- A.12.4.1
- A.12.4.2
- A.12.4.3
- A.12.4.4
- A.12.7.1
- A.13.1.1
- A.13.1.3
- A.13.2.1
- A.13.2.3
- A.13.2.4
- A.14.1.2
- A.14.1.3
- A.16.1.4
- A.16.1.5
- A.16.1.7
- A.6.1.2
- A.7.1.1
- A.7.1.2
- A.7.3.1
- A.8.2.2
- A.8.2.3
- A.9.1.1
- A.9.1.2
- A.9.2.3
- A.9.4.1
- A.9.4.4
- A.9.4.5
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.3
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.1
- CIP-007-3 R5.1.2
- CM-6(a)
- AC-6(1)
- AU-9(4)
- DE.AE-3
- DE.AE-5
- PR.AC-4
- PR.DS-5
- PR.PT-1
- RS.AN-1
- RS.AN-4
- Req-10.5.2
- If users can write to audit logs, audit trails can be modified or destroyed.
-
- CCE-90638-8
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the imagefs.inodesFree setting of the evictionSoft
+section.
+
+
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Ensure /var/log/kube-apiserver Located On Separate Partition
- Kubernetes API server audit logs are stored in the
-/var/log/kube-apiserver directory.
-
-Partitioning Red Hat CoreOS is a Day 1 operation and cannot
-be changed afterwards. For documentation on how to add a
-MachineConfig manifest that specifies a separate /var/log/kube-apiserver
-partition, follow:
-
- https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: memory.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
-
-Note that the Red Hat OpenShift documentation often references a block
-device, such as /dev/vda. The name of the available block devices depends
-on the underlying infrastructure (bare metal vs cloud), and often the specific
-instance type. For example in AWS, some instance types have NVMe drives
-(/dev/nvme*), others use /dev/xvda*.
-
-You will need to look for relevant documentation for your infrastructure around this.
-In many cases, the simplest thing is to boot a single machine with an Ignition
-configuration that just gives you SSH access, and inspect the block devices via
-e.g. the lsblk command.
-
-For physical hardware, a good best practice is to reference devices via the
-/dev/disk/by-id/ or /dev/disk/by-path links.
-
- AU-4
- Req-10.5.3
- Req-10.5.4
- SRG-APP-000357-CTR-000800
- Placing /var/log/kube-apiserver in its own partition
-enables better separation between Kubernetes API server audit
-files and other log files, and helps ensure that
-auditing cannot be halted due to the partition running out
-of space.
-
- CCE-86456-1
-
-
-
-
-
- Ensure /var/log/oauth-apiserver Located On Separate Partition
- OpenShift OAuth server audit logs are stored in the
-/var/log/oauth-apiserver directory.
-
-Partitioning Red Hat CoreOS is a Day 1 operation and cannot
-be changed afterwards. For documentation on how to add a
-MachineConfig manifest that specifies a separate /var/log/oauth-apiserver
-partition, follow:
-
- https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
-
-Note that the Red Hat OpenShift documentation often references a block
-device, such as /dev/vda. The name of the available block devices depends
-on the underlying infrastructure (bare metal vs cloud), and often the specific
-instance type. For example in AWS, some instance types have NVMe drives
-(/dev/nvme*), others use /dev/xvda*.
-
-You will need to look for relevant documentation for your infrastructure around this.
-In many cases, the simplest thing is to boot a single machine with an Ignition
-configuration that just gives you SSH access, and inspect the block devices via
-e.g. the lsblk command.
-
-For physical hardware, a good best practice is to reference devices via the
-/dev/disk/by-id/ or /dev/disk/by-path links.
-
- AU-4
- Req-10.5.3
- Req-10.5.4
- SRG-APP-000357-CTR-000800
- Placing /var/log/oauth-apiserver in its own partition
-enables better separation between OpenShift OAuth server audit
-files and other log files, and helps ensure that
-auditing cannot be halted due to the partition running out
-of space.
-
- CCE-85954-6
-
-
-
-
-
- Ensure /var/log/openshift-apiserver Located On Separate Partition
- Openshift API server audit logs are stored in the
-/var/log/openshift-apiserver directory.
-
-Partitioning Red Hat CoreOS is a Day 1 operation and cannot
-be changed afterwards. For documentation on how to add a
-MachineConfig manifest that specifies a separate /var/log/openshift-apiserver
-partition, follow:
-
- https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
-
-Note that the Red Hat OpenShift documentation often references a block
-device, such as /dev/vda. The name of the available block devices depends
-on the underlying infrastructure (bare metal vs cloud), and often the specific
-instance type. For example in AWS, some instance types have NVMe drives
-(/dev/nvme*), others use /dev/xvda*.
-
-You will need to look for relevant documentation for your infrastructure around this.
-In many cases, the simplest thing is to boot a single machine with an Ignition
-configuration that just gives you SSH access, and inspect the block devices via
-e.g. the lsblk command.
-
-For physical hardware, a good best practice is to reference devices via the
-/dev/disk/by-id/ or /dev/disk/by-path links.
-
- AU-4
- Req-10.5.3
- Req-10.5.4
- SRG-APP-000357-CTR-000800
- Placing /var/log/openshift-apiserver in its own partition
-enables better separation between Openshift API server audit
-files and other log files, and helps ensure that
-auditing cannot be halted due to the partition running out
-of space.
-
- CCE-86094-0
-
-
-
-
-
-
- OpenShift - Master Node Settings
- Contains evaluations for the master node configuration settings.
-
- Verify Group Who Owns The OpenShift Container Network Interface Files
- To properly set the group owner of /etc/cni/net.d/*, run the command: $ sudo chgrp root /etc/cni/net.d/*
+
+This rule pertains to the memory.available setting of the evictionSoft
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on both master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API and retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, saving it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11402,28 +11937,115 @@ of space.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84025-6
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84222-9
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ memory.available: {{.var_kubelet_evictionsoft_memory_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ memory.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ memory.available: {{.var_kubelet_evictionsoft_memory_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ memory.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
-
+
+
+
+
+
-
+
-
- Verify Group Who Owns The OpenShift Controller Manager Kubeconfig File
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: memory.available
-To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Controller
-Manager service. The aforementioned service is only running on
-the nodes labeled "master" by default.
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the memory.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11432,27 +12054,52 @@ the nodes labeled "master" by default.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.18
- The Controller Manager's kubeconfig contains information about how the
-component will access the API server. You should set its file ownership to
-maintain the integrity of the file.
-
- CCE-84095-9
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
+
-
+
-
- Verify Group Who Owns The Etcd Database Directory
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: memory.available
-To properly set the group owner of /var/lib/etcd/member/, run the command:
-$ sudo chgrp root /var/lib/etcd/member/
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the memory.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11461,27 +12108,54 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.12
- etcd is a highly-available key-value store used by Kubernetes deployments for
-persistent storage of all of its REST API objects. This data directory should
-be protected from any unauthorized reads or writes.
-
- CCE-83354-1
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The Etcd Write-Ahead-Log Files
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: memory.available
-To properly set the group owner of /var/lib/etcd/member/wal/*, run the command:
-$ sudo chgrp root /var/lib/etcd/member/wal/*
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the memory.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11490,25 +12164,56 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.12
- etcd is a highly-available key-value store used by Kubernetes deployments for
-persistent storage of all of its REST API objects. This data directory should
-be protected from any unauthorized reads or writes.
-
- CCE-83816-9
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The etcd Member Pod Specification File
- To properly set the group owner of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionSoft
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11517,30 +12222,115 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.8
- The etcd pod specification file controls various parameters that
-set the behavior of the etcd service in the master node. etcd is a
-highly-available key-value store which Kubernetes uses for persistent
-storage of all of its REST API object. You should restrict its file
-permissions to maintain the integrity of the file. The file should be
-writable by only the administrators on the system.
-
- CCE-83664-3
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84119-7
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ nodefs.available: {{.var_kubelet_evictionsoft_nodefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ nodefs.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ nodefs.available: {{.var_kubelet_evictionsoft_nodefs_available}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ nodefs.available: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
-
+
+
+
+
+
-
+
-
- Verify Group Who Owns The Etcd PKI Certificate Files
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.available
-To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/*.crt, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/*.crt
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11549,23 +12339,52 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
-The directory and files should be owned by the system administrator.
-
- CCE-83890-4
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
+
-
+
-
- Verify Group Who Owns The OpenShift SDN Container Network Interface Plugin IP Address Allocations
- To properly set the group owner of /var/lib/cni/networks/openshift-sdn/.*, run the command: $ sudo chgrp root /var/lib/cni/networks/openshift-sdn/.*
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11574,26 +12393,54 @@ The directory and files should be owned by the system administrator.
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84211-2
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The Kubernetes API Server Pod Specification File
- To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes API Server service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.available
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.available setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11602,25 +12449,56 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.2
- The Kubernetes specification file contains information about the configuration of the
-Kubernetes API Server that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-83530-6
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The Kubernetes Controller Manager Pod Specification File
- To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionSoft
+section.
+
+
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11629,25 +12507,115 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.4
- The Kubernetes specification file contains information about the configuration of the
-Kubernetes Controller Manager Server that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-83953-0
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+ CCE-84123-9
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ nodefs.inodesFree: {{.var_kubelet_evictionsoft_nodefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ nodefs.inodesFree: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoft:
+ nodefs.inodesFree: {{.var_kubelet_evictionsoft_nodefs_inodesfree}}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionSoftGracePeriod:
+ nodefs.inodesFree: "1m30s"
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ evictionPressureTransitionPeriod: 0s
+
-
+
+
+
+
+
-
+
-
- Verify Group Who Owns The Kubernetes Scheduler Pod Specification File
- To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Scheduler service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11656,38 +12624,52 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.6
- The Kubernetes Specification file contains information about the configuration of the
-Kubernetes scheduler that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-83614-8
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
+
-
-
-
-
+
-
-
- Verify Group Who Owns The OpenShift Admin Kubeconfig File
- To properly set the group owner of /etc/kubernetes/kubeconfig, run the command: $ sudo chgrp root /etc/kubernetes/kubeconfig
- 1.1.14
- The /etc/kubernetes/kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
+
-
- Verify Group Who Owns The OpenShift Admin Kubeconfig Files
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.inodesFree
-To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
- This rule is only applicable for nodes that run the Kubernetes API server service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11696,26 +12678,54 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.14
- There are various kubeconfig files that can be used by the administrator,
-defining various settings for the administration of the cluster. These files
-contain credentials that can be used to control the cluster and are needed
-for disaster recovery and each kubeconfig points to a different endpoint in
-the cluster. You should restrict its file permissions to maintain the
-integrity of the kubeconfig file as an attacker who gains access to these
-files can take over the cluster.
-
- CCE-84204-7
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The OpenShift Multus Container Network Interface Plugin Files
- To properly set the group owner of /var/run/multus/cni/net.d/*, run the command: $ sudo chgrp root /var/run/multus/cni/net.d/*
+
+ Ensure Eviction threshold Settings Are Set - evictionSoft: nodefs.inodesFree
+
+ Two types of garbage collection are performed on an OpenShift Container Platform node:
+
+ Container garbage collection: Removes terminated containers.
+ Image garbage collection: Removes images not referenced by any running pods.
+
+
+Container garbage collection can be performed using eviction thresholds.
+Image garbage collection relies on disk usage as reported by cAdvisor on the
+node to decide which images to remove from the node.
+
+
+The OpenShift administrator can configure how OpenShift Container Platform
+performs garbage collection by creating a kubeletConfig object for each
+Machine Config Pool using any combination of the following:
+
+
+ soft eviction for containers
+ hard eviction for containers
+ eviction for images
+
+
+To configure, follow the directions in
+the documentation
+
+
+This rule pertains to the nodefs.inodesFree setting of the evictionSoft
+section.
+
+
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11724,154 +12734,135 @@ files can take over the cluster.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83818-5
+ 1.3.1
+ Garbage collection is important to ensure sufficient resource availability
+and to avoid degraded performance and availability. In the worst case, the
+system might crash or become unusable for a long period of time.
+Based on your system resources and tests, choose an appropriate threshold
+value to activate garbage collection.
-
+
+
+
+
-
+
-
- Verify Group Who Owns The OpenShift PKI Certificate Files
-
-To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/tls.crt, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/tls.crt
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ kubelet - Ensure that the --read-only-port is secured
+ Disable the read-only port.
+ This rule's check operates on the cluster configuration dump. This is a Platform rule; var_role_worker and var_role_master need to be set if the scan is not expected to run on the master and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve the KubeletConfig through the "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint, and persist it to the local "/kubeletconfig/role/role" file.
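+
+A minimal sketch of a remediation, using the annotation-based node-role
+convention seen elsewhere in this datastream and the kubelet's
+readOnlyPort configuration field, might look as follows:
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+  annotations:
+    complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+  kubeletConfig:
+    readOnlyPort: 0  # 0 disables the kubelet's unauthenticated read-only port
+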
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
-The directory and files should be owned by the system administrator.
-
- CCE-83922-5
+ The Kubelet process provides a read-only API in addition to the main Kubelet API.
+Unauthenticated access is provided to this read-only API, which could expose
+potentially sensitive information about the cluster.
+
-
+
+
+
+
-
+
-
- Verify Group Who Owns The OpenShift PKI Private Key Files
-
-To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/*.key
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ kubelet - Ensure that the --read-only-port is secured
+ Disable the read-only port.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
-The directory and files should be owned by root:root.
-
- CCE-84172-6
+ The Kubelet process provides a read-only API in addition to the main Kubelet API.
+Unauthenticated access is provided to this read-only API, which could expose
+potentially sensitive information about the cluster.
+
-
+
-
+
-
- Verify Group Who Owns The OpenShift SDN CNI Server Config
-
-To properly set the group owner of /var/run/openshift-sdn/cniserver/config.json, run the command:
-$ sudo chgrp root /var/run/openshift-sdn/cniserver/config.json
+
+ kubelet - Ensure that the --read-only-port is secured
+ Disable the read-only port.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83605-6
-
-
-
-
-
-
-
-
- Verify Group Who Owns The OpenShift Open vSwitch Files
- To properly set the group owner of /etc/openvswitch/.*, run the command: $ sudo chgrp root /etc/openvswitch/.*
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
+ The Kubelet process provides a read-only API in addition to the main Kubelet API.
+Unauthenticated access is provided to this read-only API, which could expose
+potentially sensitive information about the cluster.
+
-
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database
- Check if the group owner of /etc/openvswitch/conf.db is
-hugetlbfs on architectures other than s390x or openvswitch
-on s390x.
+
+ kubelet - Ensure that the --read-only-port is secured
+ Disable the read-only port.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-88281-1
+ The Kubelet process provides a read-only API in addition to the main Kubelet API.
+Unauthenticated access is provided to this read-only API, which could expose
+potentially sensitive information about the cluster.
+
-
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database Lock
- Check if the group owner of /etc/openvswitch/conf.db.~lock~ is
-hugetlbfs on architectures other than s390x or openvswitch
-on s390x.
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create a new or modify an existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+
+In order to configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set
+ This rule's check operates on the cluster configuration dump. This will be a Platform rule, var_role_worker and var_role_master needed to be set if scan is not expected to run on master, and worker nodes.
+Therefore, you need to use a tool that can query the OCP API, retrieve KubeletConfig through "/api/v1/nodes/NODE_NAME/proxy/configz" API endpoint to the local "/kubeletconfig/role/role" file.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11880,25 +12871,68 @@ on s390x.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-90793-1
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection they provide. By default, Kubernetes
+supports a number of TLS cipher suites, including some that have security
+concerns, weakening the protection provided.
+
+ ---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_worker}}"
+spec:
+ kubeletConfig:
+ tlsCipherSuites: [{{.var_kubelet_tls_cipher_suites}}]
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+ annotations:
+ complianceascode.io/node-role: "{{.var_role_master}}"
+spec:
+ kubeletConfig:
+ tlsCipherSuites: [{{.var_kubelet_tls_cipher_suites}}]
+
-
+
+
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database Lock
-
-To properly set the group owner of /etc/openvswitch/.conf.db.~lock~, run the command:
-$ sudo chgrp hugetlbfs /etc/openvswitch/.conf.db.~lock~
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create a new or modify an existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+
+In order to configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11907,25 +12941,48 @@ To properly set the group owner of /etc/openvswitch/.conf.db.~lock~
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84219-5
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection they provide. By default, Kubernetes
+supports a number of TLS cipher suites, including some that have security
+concerns, weakening the protection provided.
+
-
+
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database Lock
-
-To properly set the group owner of /etc/openvswitch/.conf.db.~lock~, run the command:
-$ sudo chgrp hugetlbfs /etc/openvswitch/.conf.db.~lock~
+
+ Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers
+ Ensure that the Kubelet is configured to only use strong cryptographic ciphers.
+To set the cipher suites for the kubelet, create a new or modify an existing
+KubeletConfig object along these lines, one for every
+MachineConfigPool:
+
+ apiVersion: machineconfiguration.openshift.io/v1
+ kind: KubeletConfig
+ metadata:
+ name: kubelet-config-$pool
+ spec:
+ machineConfigPoolSelector:
+ matchLabels:
+ pools.operator.machineconfiguration.openshift.io/$pool_name: ""
+ kubeletConfig:
+ tlsCipherSuites:
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+ - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+ - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+
+In order to configure this rule to check for an alternative cipher, both var_kubelet_tls_cipher_suites_regex
+and var_kubelet_tls_cipher_suites have to be set
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -11934,745 +12991,1713 @@ To properly set the group owner of /etc/openvswitch/.conf.db.~lock~
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-85936-3
+ 4.2.13
+ TLS ciphers have had a number of known vulnerabilities and weaknesses,
+which can reduce the protection they provide. By default, Kubernetes
+supports a number of TLS cipher suites, including some that have security
+concerns, weakening the protection provided.
+
-
+
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database
-
-To properly set the group owner of /etc/openvswitch/conf.db, run the command:
-$ sudo chgrp hugetlbfs /etc/openvswitch/conf.db
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84226-0
+
+
+ OpenShift - Logging Settings
+ Contains evaluations for the cluster's logging configuration settings.
+
+ Configure the OpenShift Audit Profile
+ Audit log profiles define how to log requests that come to the OpenShift
+API server, the Kubernetes API server, and the OAuth API server.
+ Default
+ Default
+ WriteRequestBodies
+ AllRequestBodies
+
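+
+For example, to select the WriteRequestBodies profile listed above, an
+APIServer resource along the lines of the remediation shipped later in this
+datastream could be applied (a sketch with a concrete value in place of the
+variable):
+---
+apiVersion: config.openshift.io/v1
+kind: APIServer
+metadata:
+  name: cluster
+spec:
+  audit:
+    profile: WriteRequestBodies
+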
+
+ Ensure that Audit Log Errors Emit Alerts
+
+OpenShift audit works at the API server level, logging all requests coming to the server.
+However, if an API server instance is unable to write audit logs, an alert must be issued
+in order for the organization to take a relevant action, e.g. shutting down that instance.
+
+Kubernetes by default has metrics that enable one to write such alerts:
+apiserver_audit_event_total and apiserver_audit_error_total.
+
+Such an example is shipped in OCP 4.9+
+
+
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: audit-errors
+ namespace: openshift-kube-apiserver
+spec:
+ groups:
+ - name: apiserver-audit
+ rules:
+ - alert: AuditLogError
+ annotations:
+ summary: |-
+ An API Server instance was unable to write audit logs. This could be
+ triggered by the node running out of space, or a malicious actor
+ tampering with the audit logs.
+ description: An API Server had an error writing to an audit log.
+ expr: |
+ sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
+ for: 1m
+ labels:
+ severity: warning
+
+
+
+For more information, consult the
+official Kubernetes documentation.
+
+ This rule's check operates on the cluster configuration dump.
+Therefore, you need to use a tool that can query the OCP API, retrieve the following:
+/apis/monitoring.coreos.com/v1/prometheusrules?limit=500
+ API endpoint, filter with the jq utility using the following filter
+ [.items[].spec.groups[].rules[].expr]
+ and persist it to the local
+ /apis/monitoring.coreos.com/v1/prometheusrules?limit=500#72e9ad360bb6bdf4ad9e43217cd0ec9cb90e7c3b08d4fbe0edf087ad899e05a6
+ file.
+
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ AU-5
+ SRG-APP-000109-CTR-000215
+ When there are errors writing audit logs, security events will not be logged
+by that specific API Server instance. Security Incident Response teams use
+these audit logs, amongst other artifacts, to determine the impact of
+security breaches or events. Without these logs, it becomes very difficult
+to assess a situation and do appropriate root cause analysis in such incidents.
+ CCE-90744-4
+ ---
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: audit-errors
+ namespace: openshift-kube-apiserver
+spec:
+ groups:
+ - name: apiserver-audit
+ rules:
+ - alert: AuditLogError
+ annotations:
+ summary: |-
+ An API Server instance was unable to write audit logs. This could be
+ triggered by the node running out of space, or a malicious actor
+ tampering with the audit logs.
+ description: An API Server had an error writing to an audit log.
+ expr: |
+ sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
+ for: 1m
+ labels:
+ severity: warning
+
-
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Configuration Database
+
+ Ensure that Audit Log Forwarding Uses TLS
-To properly set the group owner of /etc/openvswitch/conf.db, run the command:
-$ sudo chgrp openvswitch /etc/openvswitch/conf.db
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-85927-2
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Open vSwitch Process ID File
- Ensure that the file /var/run/openvswitch/ovs-vswitchd.pid,
-is owned by the group openvswitch or hugetlbfs,
-depending on your settings and Open vSwitch version.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83630-4
+
+OpenShift audit works at the API server level, logging all requests coming to the server.
+Audit is on by default and the best practice is to ship audit logs off the cluster for retention
+using a secure protocol.
+
+
+The cluster-logging-operator is able to do this with the ClusterLogForwarder resource.
+The aforementioned resource can be configured to forward logs to different third-party systems.
+For more information on this, please reference the official documentation:
+
+ https://docs.openshift.com/container-platform/latest/logging/cluster-logging-external.html
+
+
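+
+As an illustrative sketch (the output name and endpoint below are
+hypothetical), a ClusterLogForwarder that ships audit logs over TLS might
+look like:
+---
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  outputs:
+  - name: remote-audit-store  # hypothetical output name
+    type: syslog
+    url: 'tls://audit.example.com:6514'  # hypothetical TLS endpoint
+  pipelines:
+  - name: audit-to-remote
+    inputRefs:
+    - audit
+    outputRefs:
+    - remote-audit-store
+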
+ This rule's check operates on the cluster configuration dump.
+Therefore, you need to use a tool that can query the OCP API, retrieve the following:
+/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance
+ API endpoint, filter with the jq utility using the following filter
+ try [.spec.outputs[].url] catch []
+ and persist it to the local
+ /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance#71786452ba18c51ba8ad51472a078619e2e8b52a86cd75087af5aab42400f6c0
+ file.
+
+ CIP-003-8 R5.2
+ CIP-004-6 R3.3
+ CIP-007-3 R6.5
+ AU-9
+ AU-9(2)
+ AU-9(3)
+ AU-10
+ SRG-APP-000118-CTR-000240
+ SRG-APP-000119-CTR-000245
+ SRG-APP-000120-CTR-000250
+ SRG-APP-000121-CTR-000255
+ SRG-APP-000122-CTR-000260
+ SRG-APP-000123-CTR-000265
+ SRG-APP-000126-CTR-000275
+ SRG-APP-000290-CTR-000670
+ It is necessary to ensure that any configured output uses the TLS protocol to receive
+logs in order to ensure the confidentiality, integrity and authenticity of the logs.
+ CCE-90688-3
-
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Persistent System ID
- Check if the group owner of /etc/openvswitch/system-id.conf is
-hugetlbfs on architectures other than s390x or openvswitch
-on x390x.
+
+ Ensure that the cluster's audit profile is properly set
+
+
+OpenShift can audit the details of requests made to the API server through
+the standard Kubernetes audit capabilities.
+
+
+In OpenShift, auditing of the API Server is on by default. Audit provides a
+security-relevant chronological set of records documenting the sequence of
+activities that have affected system by individual users, administrators, or
+other components of the system. Audit works at the API server level, logging
+all requests coming to the server. Each audit log contains two entries:
+
+
+The request line containing:
+
+
+ A unique ID that allows matching the response line (see #2)
+ The source IP of the request
+ The HTTP method being invoked
+ The original user invoking the operation
+ The impersonated user for the operation (self meaning himself)
+ The impersonated group for the operation (lookup meaning user's group)
+ The namespace of the request or none
+ The URI as requested
+
+
+The response line containing:
+
+
+ The aforementioned unique ID
+ The response code
+
+
+For more information on how to configure the audit profile, please visit
+the documentation
+
+
+ This rule's check operates on the cluster configuration dump.
+Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint, and persist it to the local /apis/config.openshift.io/v1/apiservers/cluster file.
+ CIP-003-8 R4
+ CIP-003-8 R4.1
+ CIP-003-8 R4.2
+ CIP-003-8 R5.2
CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-85892-8
+ CIP-004-6 R2.2.2
+ CIP-004-6 R2.2.3
+ CIP-004-6 R3.3
+ CIP-007-3 R.1.3
+ CIP-007-3 R5
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.2
+ CIP-007-3 R5.3.1
+ CIP-007-3 R5.3.2
+ CIP-007-3 R5.3.3
+ CIP-007-3 R6.5
+ AU-2
+ AU-3
+ AU-3(1)
+ AU-6
+ AU-6(1)
+ AU-7
+ AU-7(1)
+ AU-8
+ AU-8(1)
+ AU-9
+ AU-12
+ AU-12(1)
+ CM-5(1)
+ SI-11
+ SI-12
+ SI-4(20)
+ SI-4(23)
+ Req-2.2
+ Req-12.5.5
+ SRG-APP-000089-CTR-000150
+ SRG-APP-000090-CTR-000155
+ SRG-APP-000091-CTR-000160
+ SRG-APP-000095-CTR-000170
+ SRG-APP-000096-CTR-000175
+ SRG-APP-000097-CTR-000180
+ SRG-APP-000098-CTR-000185
+ SRG-APP-000099-CTR-000190
+ SRG-APP-000100-CTR-000195
+ SRG-APP-000100-CTR-000200
+ SRG-APP-000101-CTR-000205
+ SRG-APP-000116-CTR-000235
+ SRG-APP-000118-CTR-000240
+ SRG-APP-000119-CTR-000245
+ SRG-APP-000120-CTR-000250
+ SRG-APP-000121-CTR-000255
+ SRG-APP-000122-CTR-000260
+ SRG-APP-000123-CTR-000265
+ SRG-APP-000181-CTR-000485
+ SRG-APP-000266-CTR-000625
+ SRG-APP-000374-CTR-000865
+ SRG-APP-000375-CTR-000870
+ SRG-APP-000380-CTR-000900
+ SRG-APP-000381-CTR-000905
+ SRG-APP-000492-CTR-001220
+ SRG-APP-000493-CTR-001225
+ SRG-APP-000494-CTR-001230
+ SRG-APP-000495-CTR-001235
+ SRG-APP-000496-CTR-001240
+ SRG-APP-000497-CTR-001245
+ SRG-APP-000498-CTR-001250
+ SRG-APP-000499-CTR-001255
+ SRG-APP-000500-CTR-001260
+ SRG-APP-000501-CTR-001265
+ SRG-APP-000502-CTR-001270
+ SRG-APP-000503-CTR-001275
+ SRG-APP-000504-CTR-001280
+ SRG-APP-000505-CTR-001285
+ SRG-APP-000506-CTR-001290
+ SRG-APP-000507-CTR-001295
+ SRG-APP-000508-CTR-001300
+ SRG-APP-000509-CTR-001305
+ SRG-APP-000510-CTR-001310
+ 3.2.1
+ 3.2.2
+ Logging is an important detective control for all systems, to detect potential
+unauthorised access.
+ CCE-83577-7
+ ---
+apiVersion: config.openshift.io/v1
+kind: APIServer
+metadata:
+ name: cluster
+spec:
+ audit:
+ profile: {{.var_openshift_audit_profile}}
+
-
+
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Persistent System ID
-
-To properly set the group owner of /etc/openvswitch/system-id.conf, run the command:
-$ sudo chgrp hugetlbfs /etc/openvswitch/system-id.conf
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83677-5
+
+ Ensure that OpenShift Logging Operator is scanning the cluster
+ OpenShift Logging Operator provides the ability to aggregate all the logs from the
+OpenShift Container Platform cluster, such as node system audit logs, application
+container logs, and infrastructure logs. OpenShift Logging aggregates these logs
+from throughout the OpenShift cluster and stores them in a default log store. [1]
+
+[1]https://docs.openshift.com/container-platform/4.10/logging/cluster-logging.html
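+
+A minimal sketch of the ClusterLogging instance this check looks for (a
+production deployment would typically also define a logStore):
+---
+apiVersion: logging.openshift.io/v1
+kind: ClusterLogging
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  managementState: Managed
+  collection:
+    logs:
+      type: fluentd
+      fluentd: {}
+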
+ This rule's check operates on the cluster configuration dump.
+Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance API endpoint, and persist it to the local /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance file.
+ AU-3(2)
+ SRG-APP-000092-CTR-000165
+ SRG-APP-000111-CTR-000220
+ OpenShift Logging Operator is able to collect, aggregate, and manage logs.
+ CCE-85918-1
-
+
+
-
+
-
- Verify Group Who Owns The Open vSwitch Persistent System ID
-
-To properly set the group owner of /etc/openvswitch/system-id.conf, run the command:
-$ sudo chgrp hugetlbfs /etc/openvswitch/system-id.conf
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-85928-0
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Open vSwitch Daemon PID File
- Ensure that the file /run/openvswitch/ovs-vswitchd.pid,
-is owned by the group openvswitch or hugetlbfs,
-depending on your settings and Open vSwitch version.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84129-6
-
-
-
-
-
-
-
-
- Verify Group Who Owns The Open vSwitch Database Server PID
- Ensure that the file /run/openvswitch/ovsdb-server.pid,
-is owned by the group openvswitch or hugetlbfs,
-depending on your settings and Open vSwitch version.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84166-8
+
+ Record Access Events to Kubernetes Audit Log Directory
+ The audit system should collect access events to read the Kubernetes audit log directory.
+The following audit rule will assure that access events to the audit log directory are
+collected.
+-a always,exit -F dir=/var/log/kube-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
+If the auditd daemon is configured to use the augenrules
+program to read audit rules during daemon startup (the default), add the
+rule to a file with suffix .rules in the directory
+/etc/audit/rules.d.
+If the auditd daemon is configured to use the auditctl
+utility to read audit rules during daemon startup, add the rule to
+the /etc/audit/audit.rules file.
+ AU-2(d)
+ AU-12(c)
+ AC-6(9)
+ CM-6(a)
+ SRG-APP-000343-CTR-000780
+ Attempts to read the logs should be recorded; suspicious access to audit log files could be an indicator of malicious activity on a system.
+Auditing these events could serve as evidence of potential system compromise.
+
+ CCE-83640-3
+ ---
+#
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+spec:
+ config:
+ ignition:
+ version: 3.1.0
+ storage:
+ files:
+ - contents:
+ source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/kube-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
+ mode: 0600
+ path: /etc/audit/rules.d/30-access-var-log-kube-audit.rules
+ overwrite: true
+
-
+
-
+
-
- Verify Group Who Owns The Kubernetes Scheduler Kubeconfig File
-
-To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig, run the command:
-$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Scheduler service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.16
- The kubeconfig for the Scheduler contains paramters for the scheduler
-to access the Kube API.
-You should set its file ownership to maintain the integrity of the file.
+
+ Record Access Events to OAuth Audit Log Directory
+ The audit system should collect access events to read the OAuth audit log directory.
+The following audit rule will assure that access events to the audit log directory are
+collected.
+-a always,exit -F dir=/var/log/oauth-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
+If the auditd daemon is configured to use the augenrules
+program to read audit rules during daemon startup (the default), add the
+rule to a file with suffix .rules in the directory
+/etc/audit/rules.d.
+If the auditd daemon is configured to use the auditctl
+utility to read audit rules during daemon startup, add the rule to
+the /etc/audit/audit.rules file.
+ AU-2(d)
+ AU-12(c)
+ AC-6(9)
+ CM-6(a)
+ SRG-APP-000343-CTR-000780
+ Attempts to read the logs should be recorded; suspicious access to audit log files could be an indicator of malicious activity on a system.
+Auditing these events could serve as evidence of potential system compromise.
- CCE-83471-3
+ CCE-90631-3
+ ---
+#
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+spec:
+ config:
+ ignition:
+ version: 3.1.0
+ storage:
+ files:
+ - contents:
+ source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/oauth-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
+ mode: 0600
+ path: /etc/audit/rules.d/30-access-var-log-oauth-audit.rules
+ overwrite: true
+
-
+
-
+
-
- Verify User Who Owns The OpenShift Container Network Interface Files
- To properly set the owner of /etc/cni/net.d/*, run the command: $ sudo chown root /etc/cni/net.d/*
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83460-6
+
+ Record Access Events to OpenShift Audit Log Directory
+ The audit system should collect access events to read the OpenShift audit log directory.
+The following audit rule will assure that access events to the audit log directory are
+collected.
+-a always,exit -F dir=/var/log/openshift-apiserver/ -F perm=r -F auid>=1000 -F auid!=unset -F key=access-audit-trail
+If the auditd daemon is configured to use the augenrules
+program to read audit rules during daemon startup (the default), add the
+rule to a file with suffix .rules in the directory
+/etc/audit/rules.d.
+If the auditd daemon is configured to use the auditctl
+utility to read audit rules during daemon startup, add the rule to
+/etc/audit/audit.rules file.
+ AU-2(d)
+ AU-12(c)
+ AC-6(9)
+ CM-6(a)
+ SRG-APP-000343-CTR-000780
+ Attempts to read the logs should be recorded; suspicious access to audit log files could be an indicator of malicious activity on a system.
+Auditing these events could serve as evidence of potential system compromise.
+
+ CCE-90632-1
+ ---
+#
+
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+spec:
+ config:
+ ignition:
+ version: 3.1.0
+ storage:
+ files:
+ - contents:
+ source: data:,{{ -a%20always%2Cexit%20-F%20dir%3D/var/log/openshift-apiserver/%20-F%20perm%3Dr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Daccess-audit-trail%0A }}
+ mode: 0600
+ path: /etc/audit/rules.d/30-access-var-log-ocp-audit.rules
+ overwrite: true
+
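+
+A brief hedged sketch of the two rule-loading paths described above,
+assuming the standard audit userspace tooling:
+
+# augenrules merges /etc/audit/rules.d/*.rules into /etc/audit/audit.rules
+$ sudo augenrules --load
+# auditctl can instead read rules directly from a file
+$ sudo auditctl -R /etc/audit/audit.rules
+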
-
+
-
+
-
- Verify User Who Owns The OpenShift Controller Manager Kubeconfig File
+
+ The Kubernetes Audit Logs Directory Must Have Mode 0700
-To properly set the owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.18
- The Controller Manager's kubeconfig contains information about how the
-component will access the API server. You should set its file ownership to
-maintain the integrity of the file.
+To properly set the permissions of /var/log/kube-apiserver/, run the command:
+$ sudo chmod 0700 /var/log/kube-apiserver/
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.2
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-004-6 R3.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CIP-007-3 R6.5
+ CM-6(a)
+ AC-6(1)
+ AU-9
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-APP-000118-CTR-000240
+ SRG-APP-000119-CTR-000245
+ SRG-APP-000120-CTR-000250
+ SRG-APP-000121-CTR-000255
+ SRG-APP-000122-CTR-000260
+ SRG-APP-000123-CTR-000265
+ If users can write to audit logs, audit trails can be modified or destroyed.
- CCE-83904-3
+ CCE-83645-2
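+
+A minimal verification sketch (illustrative only): stat can confirm that the
+directory mode matches the 0700 requirement described above:
+
+# Print the octal mode of the audit log directory; expect 700
+$ stat -c '%a' /var/log/kube-apiserver/
+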
-
+
-
+
-
- Verify User Who Owns The Etcd Database Directory
+
+ The OAuth Audit Logs Directory Must Have Mode 0700
-To properly set the owner of /var/lib/etcd/member/, run the command:
-$ sudo chown root /var/lib/etcd/member/
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.12
- etcd is a highly-available key-value store used by Kubernetes deployments for
-persistent storage of all of its REST API objects. This data directory should
-be protected from any unauthorized reads or writes.
+To properly set the permissions of /var/log/oauth-apiserver/, run the command:
+$ sudo chmod 0700 /var/log/oauth-apiserver/
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.2
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-004-6 R3.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CIP-007-3 R6.5
+ CM-6(a)
+ AC-6(1)
+ AU-9
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-APP-000118-CTR-000240
+ SRG-APP-000119-CTR-000245
+ SRG-APP-000120-CTR-000250
+ SRG-APP-000121-CTR-000255
+ SRG-APP-000122-CTR-000260
+ SRG-APP-000123-CTR-000265
+ If users can write to audit logs, audit trails can be modified or destroyed.
- CCE-83905-0
+ CCE-90633-9
-
+
-
+
-
- Verify User Who Owns The Etcd Write-Ahead-Log Files
+
+ The OpenShift Audit Logs Directory Must Have Mode 0700
-To properly set the owner of /var/lib/etcd/member/wal/*, run the command:
-$ sudo chown root /var/lib/etcd/member/wal/*
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.12
- etcd is a highly-available key-value store used by Kubernetes deployments for
-persistent storage of all of its REST API objects. This data directory should
-be protected from any unauthorized reads or writes.
-
- CCE-84010-8
-
-
-
-
-
-
-
-
- Verify User Who Owns The Etcd Member Pod Specification File
- To properly set the owner of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.8
- The etcd pod specification file controls various parameters that
-set the behavior of the etcd service in the master node. etcd is a
-highly-available key-value store which Kubernetes uses for persistent
-storage of all of its REST API object. You should restrict its file
-permissions to maintain the integrity of the file. The file should be
-writable by only the administrators on the system.
-
- CCE-83988-6
-
-
-
-
-
-
-
-
- Verify User Who Owns The Etcd PKI Certificate Files
-
-To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/*.crt, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/*.crt
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
-The directory and files should be owned by the system administrator.
+To properly set the permissions of /var/log/openshift-apiserver/, run the command:
+$ sudo chmod 0700 /var/log/openshift-apiserver/
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.2
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-004-6 R3.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CIP-007-3 R6.5
+ CM-6(a)
+ AC-6(1)
+ AU-9
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-APP-000118-CTR-000240
+ SRG-APP-000119-CTR-000245
+ SRG-APP-000120-CTR-000250
+ SRG-APP-000121-CTR-000255
+ SRG-APP-000122-CTR-000260
+ SRG-APP-000123-CTR-000265
+ If users can write to audit logs, audit trails can be modified or destroyed.
- CCE-83898-7
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift SDN Container Network Interface Plugin IP Address Allocations
- To properly set the owner of /var/lib/cni/networks/openshift-sdn/.*, run the command: $ sudo chown root /var/lib/cni/networks/openshift-sdn/.*
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84248-4
+ CCE-90634-7
-
+
-
+
-
- Verify User Who Owns The Kubernetes API Server Pod Specification File
- To properly set the owner of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes API Server service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.2
- The Kubernetes specification file contains information about the configuration of the
-Kubernetes API Server that is configured on the system. Protection of this file is
-critical for OpenShift security.
+
+ Kubernetes Audit Logs Must Be Owned By Root
+ All audit logs must be owned by the root user and group. By default, the path for the Kubernetes audit log is /var/log/kube-apiserver/.
+
+To properly set the owner of /var/log/kube-apiserver, run the command:
+$ sudo chown root /var/log/kube-apiserver
+
+To properly set the owner of /var/log/kube-apiserver/*, run the command:
+$ sudo chown root /var/log/kube-apiserver/*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ CCI-000162
+ CCI-000163
+ CCI-000164
+ CCI-001314
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-OS-000057-GPOS-00027
+ SRG-OS-000058-GPOS-00028
+ SRG-OS-000059-GPOS-00029
+ SRG-OS-000206-GPOS-00084
+ Unauthorized disclosure of audit records can reveal system and configuration data to
+attackers, thus compromising their confidentiality.
- CCE-83372-3
+ CCE-83650-2
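+
+A hedged verification sketch (illustrative only): find can list any entries
+under the audit log path that violate the ownership requirement above; no
+output means the check passes:
+
+# List entries not owned by user root or not owned by group root
+$ sudo find /var/log/kube-apiserver \( -not -user root -o -not -group root \)
+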
-
+
-
+
-
- Verify User Who Owns The Kubernetes Controller Manager Pod Specificiation File
- To properly set the owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.4
- The Kubernetes specification file contains information about the configuration of the
-Kubernetes Controller Manager Server that is configured on the system. Protection of this file is
-critical for OpenShift security.
+
+ OAuth Audit Logs Must Be Owned By Root
+ All audit logs must be owned by the root user and group. By default, the path for the OAuth audit log is /var/log/oauth-apiserver/.
+
+To properly set the owner of /var/log/oauth-apiserver, run the command:
+$ sudo chown root /var/log/oauth-apiserver
+
+To properly set the owner of /var/log/oauth-apiserver/*, run the command:
+$ sudo chown root /var/log/oauth-apiserver/*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ CCI-000162
+ CCI-000163
+ CCI-000164
+ CCI-001314
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-OS-000057-GPOS-00027
+ SRG-OS-000058-GPOS-00028
+ SRG-OS-000059-GPOS-00029
+ SRG-OS-000206-GPOS-00084
+ Unauthorized disclosure of audit records can reveal system and configuration data to
+attackers, thus compromising their confidentiality.
- CCE-83795-5
+ CCE-90635-4
-
+
-
+
-
- Verify User Who Owns The Kubernetes Scheduler Pod Specification File
- To properly set the owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Scheduler service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.6
- The Kubernetes specification file contains information about the configuration of the
-Kubernetes scheduler that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-83393-9
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift Admin Kubeconfig File
- To properly set the owner of /etc/kubernetes/kubeconfig, run the command: $ sudo chown root /etc/kubernetes/kubeconfig
- 1.1.14
- The /etc/kubernetes/kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
-
-
-
- Verify User Who Owns The OpenShift Admin Kubeconfig Files
-
-To properly set the owner of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.14
- There are various kubeconfig files that can be used by the administrator,
-defining various settings for the administration of the cluster. These files
-contain credentials that can be used to control the cluster and are needed
-for disaster recovery and each kubeconfig points to a different endpoint in
-the cluster. You should restrict its file permissions to maintain the
-integrity of the kubeconfig file as an attacker who gains access to these
-files can take over the cluster.
+
+ OpenShift Audit Logs Must Be Owned By Root
+ All audit logs must be owned by the root user and group. By default, the path for the OpenShift audit log is /var/log/openshift-apiserver/.
+
+To properly set the owner of /var/log/openshift-apiserver, run the command:
+$ sudo chown root /var/log/openshift-apiserver
+
+To properly set the owner of /var/log/openshift-apiserver/*, run the command:
+$ sudo chown root /var/log/openshift-apiserver/*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ CCI-000162
+ CCI-000163
+ CCI-000164
+ CCI-001314
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ SRG-OS-000057-GPOS-00027
+ SRG-OS-000058-GPOS-00028
+ SRG-OS-000059-GPOS-00029
+ SRG-OS-000206-GPOS-00084
+ Unauthorized disclosure of audit records can reveal system and configuration data to
+attackers, thus compromising their confidentiality.
- CCE-83719-5
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift Multus Container Network Interface Plugin Files
- To properly set the owner of /var/run/multus/cni/net.d/*, run the command: $ sudo chown root /var/run/multus/cni/net.d/*
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83603-1
+ CCE-90636-2
-
+
-
+
-
- Verify User Who Owns The OpenShift PKI Certificate Files
+
+ Kubernetes Audit Logs Must Have Mode 0600
-To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/tls.crt, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/tls.crt
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
+To properly set the permissions of /var/log/kube-apiserver/.*, run the command:
+$ sudo chmod 0600 /var/log/kube-apiserver/.*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ If users can write to audit logs, audit trails can be modified or destroyed.
- CCE-83558-7
+ CCE-83654-4
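+
+An illustrative sketch (not datastream content): find's -perm test can flag
+log files whose mode is more permissive than the 0600 required above; no
+output means all files comply:
+
+# List files with any group or other permission bits set (beyond 0600)
+$ sudo find /var/log/kube-apiserver -type f -perm /077
+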
-
+
-
+
-
- Verify User Who Owns The OpenShift PKI Private Key Files
+
+ OAuth Audit Logs Must Have Mode 0600
-To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/*.key
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.19
- OpenShift makes use of a number of certificates as part of its operation.
-You should verify the ownership of the directory containing the PKI
-information and all files in that directory to maintain their integrity.
-The directory and files should be owned by root:root.
+To properly set the permissions of /var/log/oauth-apiserver/.*, run the command:
+$ sudo chmod 0600 /var/log/oauth-apiserver/.*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ If users can write to audit logs, audit trails can be modified or destroyed.
- CCE-83435-8
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift SDN CNI Server Config
-
-To properly set the owner of /var/run/openshift-sdn/cniserver/config.json, run the command:
-$ sudo chown root /var/run/openshift-sdn/cniserver/config.json
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83932-4
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift Open vSwitch Files
- To properly set the owner of /etc/openvswitch/.*, run the command: $ sudo chown root /etc/openvswitch/.*
- 1.1.10
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
+ CCE-90637-0
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Configuration Database
+
+ OpenShift Audit Logs Must Have Mode 0600
-To properly set the owner of /etc/openvswitch/conf.db, run the command:
-$ sudo chown openvswitch /etc/openvswitch/conf.db
- CIP-003-8 R6
- CIP-004-6 R3
- CIP-007-3 R6.1
- CM-6
- CM-6(1)
- SRG-APP-000516-CTR-001325
- SRG-APP-000516-CTR-001330
- SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83489-5
+To properly set the permissions of /var/log/openshift-apiserver/.*, run the command:
+$ sudo chmod 0600 /var/log/openshift-apiserver/.*
+ 1
+ 11
+ 12
+ 13
+ 14
+ 15
+ 16
+ 18
+ 19
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 5.4.1.1
+ APO01.06
+ APO11.04
+ APO12.06
+ BAI03.05
+ BAI08.02
+ DSS02.02
+ DSS02.04
+ DSS02.07
+ DSS03.01
+ DSS05.04
+ DSS05.07
+ DSS06.02
+ MEA02.01
+ 3.3.1
+ 4.2.3.10
+ 4.3.3.3.9
+ 4.3.3.5.8
+ 4.3.3.7.3
+ 4.3.4.4.7
+ 4.3.4.5.6
+ 4.3.4.5.7
+ 4.3.4.5.8
+ 4.4.2.1
+ 4.4.2.2
+ 4.4.2.4
+ SR 2.1
+ SR 2.10
+ SR 2.11
+ SR 2.12
+ SR 2.8
+ SR 2.9
+ SR 5.2
+ SR 6.1
+ A.10.1.1
+ A.11.1.4
+ A.11.1.5
+ A.11.2.1
+ A.12.4.1
+ A.12.4.2
+ A.12.4.3
+ A.12.4.4
+ A.12.7.1
+ A.13.1.1
+ A.13.1.3
+ A.13.2.1
+ A.13.2.3
+ A.13.2.4
+ A.14.1.2
+ A.14.1.3
+ A.16.1.4
+ A.16.1.5
+ A.16.1.7
+ A.6.1.2
+ A.7.1.1
+ A.7.1.2
+ A.7.3.1
+ A.8.2.2
+ A.8.2.3
+ A.9.1.1
+ A.9.1.2
+ A.9.2.3
+ A.9.4.1
+ A.9.4.4
+ A.9.4.5
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.3
+ CIP-007-3 R2.1
+ CIP-007-3 R2.2
+ CIP-007-3 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.1
+ CIP-007-3 R5.1.2
+ CM-6(a)
+ AC-6(1)
+ AU-9(4)
+ DE.AE-3
+ DE.AE-5
+ PR.AC-4
+ PR.DS-5
+ PR.PT-1
+ RS.AN-1
+ RS.AN-4
+ Req-10.5.2
+ If users can write to audit logs, audit trails can be modified or destroyed.
+
+ CCE-90638-8
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Configuration Database Lock
-
-To properly set the owner of /etc/openvswitch/.conf.db.~lock~, run the command:
-$ sudo chown openvswitch /etc/openvswitch/.conf.db.~lock~
+
+ Ensure /var/log/kube-apiserver Located On Separate Partition
+ Kubernetes API server audit logs are stored in the
+/var/log/kube-apiserver directory.
+
+Partitioning Red Hat CoreOS is a Day 1 operation and cannot
+be changed afterwards. For documentation on how to add a
+MachineConfig manifest that specifies a separate /var/log/kube-apiserver
+partition, follow:
+
+ https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+
+Note that the Red Hat OpenShift documentation often references a block
+device, such as /dev/vda. The names of the available block devices depend
+on the underlying infrastructure (bare metal vs. cloud), and often on the
+specific instance type. For example, in AWS some instance types have NVMe
+drives (/dev/nvme*), while others use /dev/xvda*.
+
+Consult the documentation relevant to your infrastructure. In many cases,
+the simplest approach is to boot a single machine with an Ignition
+configuration that just gives you SSH access, and inspect the block devices
+with e.g. the lsblk command.
+
+For physical hardware, a best practice is to reference devices via the
+/dev/disk/by-id/ or /dev/disk/by-path links.
+
+ AU-4
+ Req-10.5.3
+ Req-10.5.4
+ SRG-APP-000357-CTR-000800
+ Placing /var/log/kube-apiserver in its own partition
+enables better separation between Kubernetes API server audit
+files and other log files, and helps ensure that
+auditing cannot be halted due to the partition running out
+of space.
+
+ CCE-86456-1
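+
+A hedged post-install sketch (illustrative only): findmnt reports whether the
+path is its own mount point, and lsblk shows the backing block device:
+
+# Succeeds and prints the mount only if the path is a separate mount point
+$ findmnt --mountpoint /var/log/kube-apiserver
+# Inspect block devices and their mount points
+$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
+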
+
+
+
+
+
+ Ensure /var/log/oauth-apiserver Located On Separate Partition
+ OpenShift OAuth server audit logs are stored in the
+/var/log/oauth-apiserver directory.
+
+Partitioning Red Hat CoreOS is a Day 1 operation and cannot
+be changed afterwards. For documentation on how to add a
+MachineConfig manifest that specifies a separate /var/log/oauth-apiserver
+partition, follow:
+
+ https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+
+Note that the Red Hat OpenShift documentation often references a block
+device, such as /dev/vda. The names of the available block devices depend
+on the underlying infrastructure (bare metal vs. cloud), and often on the
+specific instance type. For example, in AWS some instance types have NVMe
+drives (/dev/nvme*), while others use /dev/xvda*.
+
+Consult the documentation relevant to your infrastructure. In many cases,
+the simplest approach is to boot a single machine with an Ignition
+configuration that just gives you SSH access, and inspect the block devices
+with e.g. the lsblk command.
+
+For physical hardware, a best practice is to reference devices via the
+/dev/disk/by-id/ or /dev/disk/by-path links.
+
+ AU-4
+ Req-10.5.3
+ Req-10.5.4
+ SRG-APP-000357-CTR-000800
+ Placing /var/log/oauth-apiserver in its own partition
+enables better separation between OpenShift OAuth server audit
+files and other log files, and helps ensure that
+auditing cannot be halted due to the partition running out
+of space.
+
+ CCE-85954-6
+
+
+
+
+
+ Ensure /var/log/openshift-apiserver Located On Separate Partition
+ OpenShift API server audit logs are stored in the
+/var/log/openshift-apiserver directory.
+
+Partitioning Red Hat CoreOS is a Day 1 operation and cannot
+be changed afterwards. For documentation on how to add a
+MachineConfig manifest that specifies a separate /var/log/openshift-apiserver
+partition, follow:
+
+ https://docs.openshift.com/container-platform/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html#installation-user-infra-machines-advanced_disk_installing-platform-agnostic
+
+
+Note that the Red Hat OpenShift documentation often references a block
+device, such as /dev/vda. The names of the available block devices depend
+on the underlying infrastructure (bare metal vs. cloud), and often on the
+specific instance type. For example, in AWS some instance types have NVMe
+drives (/dev/nvme*), while others use /dev/xvda*.
+
+Consult the documentation relevant to your infrastructure. In many cases,
+the simplest approach is to boot a single machine with an Ignition
+configuration that just gives you SSH access, and inspect the block devices
+with e.g. the lsblk command.
+
+For physical hardware, a best practice is to reference devices via the
+/dev/disk/by-id/ or /dev/disk/by-path links.
+
+ AU-4
+ Req-10.5.3
+ Req-10.5.4
+ SRG-APP-000357-CTR-000800
+ Placing /var/log/openshift-apiserver in its own partition
+enables better separation between OpenShift API server audit
+files and other log files, and helps ensure that
+auditing cannot be halted due to the partition running out
+of space.
+
+ CCE-86094-0
+
+
+
+
+
+
+ OpenShift - Master Node Settings
+ Contains evaluations for the master node configuration settings.
+
+ Verify Group Who Owns The OpenShift Container Network Interface Files
+ To properly set the group owner of /etc/cni/net.d/*, run the command: $ sudo chgrp root /etc/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12681,25 +14706,28 @@ To properly set the owner of /etc/openvswitch/.conf.db.~lock~
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
+ 1.1.10
CNI (Container Network Interface) files consist of a specification and libraries for
writing plugins to configure network interfaces in Linux containers, along with a number
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
- CCE-83462-2
+ CCE-84025-6
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Process ID File
+
+ Verify Group Who Owns The OpenShift Controller Manager Kubeconfig File
-To properly set the owner of /var/run/openvswitch/ovs-vswitchd.pid, run the command:
-$ sudo chown openvswitch /var/run/openvswitch/ovs-vswitchd.pid
+To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Controller
+Manager service. The aforementioned service is only running on
+the nodes labeled "master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12708,25 +14736,27 @@ To properly set the owner of /var/run/openvswitch/ovs-vswitchd.pid
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83937-3
+ 1.1.18
+ The Controller Manager's kubeconfig contains information about how the
+component will access the API server. You should set its file ownership to
+maintain the integrity of the file.
+
+ CCE-84095-9
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Persistent System ID
+
+ Verify Group Who Owns The Etcd Database Directory
-To properly set the owner of /etc/openvswitch/system-id.conf, run the command:
-$ sudo chown openvswitch /etc/openvswitch/system-id.conf
+To properly set the group owner of /var/lib/etcd/member/, run the command:
+$ sudo chgrp root /var/lib/etcd/member/
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12735,25 +14765,27 @@ To properly set the owner of /etc/openvswitch/system-id.conf
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-84085-0
+ 1.1.12
+ etcd is a highly-available key-value store used by Kubernetes deployments for
+persistent storage of all of its REST API objects. This data directory should
+be protected from any unauthorized reads or writes.
+
+ CCE-83354-1
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Daemon PID File
+
+ Verify Group Who Owns The Etcd Write-Ahead-Log Files
-To properly set the owner of /run/openvswitch/ovs-vswitchd.pid, run the command:
-$ sudo chown openvswitch /run/openvswitch/ovs-vswitchd.pid
+To properly set the group owner of /var/lib/etcd/member/wal/*, run the command:
+$ sudo chgrp root /var/lib/etcd/member/wal/*
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12762,25 +14794,25 @@ To properly set the owner of /run/openvswitch/ovs-vswitchd.pid
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83888-8
+ 1.1.12
+ etcd is a highly-available key-value store used by Kubernetes deployments for
+persistent storage of all of its REST API objects. This data directory should
+be protected from any unauthorized reads or writes.
+
+ CCE-83816-9
-
+
-
+
-
- Verify User Who Owns The Open vSwitch Database Server PID
-
-To properly set the owner of /run/openvswitch/ovsdb-server.pid, run the command:
-$ sudo chown openvswitch /run/openvswitch/ovsdb-server.pid
+
+ Verify Group Who Owns The etcd Member Pod Specification File
+ To properly set the group owner of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12789,26 +14821,28 @@ To properly set the owner of /run/openvswitch/ovsdb-server.pid
 SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83806-0
+ 1.1.8
+ The etcd pod specification file controls various parameters that
+set the behavior of the etcd service in the master node. etcd is a
+highly-available key-value store which Kubernetes uses for persistent
+storage of all of its REST API object. You should restrict its file
+permissions to maintain the integrity of the file. The file should be
+writable by only the administrators on the system.
+
+ CCE-83664-3
-
+
-
+
-
- Verify User Who Owns The Kubernetes Scheduler Kubeconfig File
+
+ Verify Group Who Owns The Etcd PKI Certificate Files
-To properly set the owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig, run the command:
-$ sudo chown root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Scheduler service.
+To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/*.crt, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/*.crt
+ This rule is only applicable for nodes that run the Etcd service.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -12819,38 +14853,23 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.16
- The kubeconfig for the Scheduler contains paramters for the scheduler
-to access the Kube API.
-You should set its file ownership to maintain the integrity of the file.
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+The directory and files should be owned by the system administrator.
- CCE-84017-3
-
-
-
-
-
-
-
-
- Verify User Who Owns The OpenShift etcd Data Directory
- To properly set the owner of /var/lib/etcd, run the command: $ sudo chown root /var/lib/etcd
- 1.1.12
- The /var/lib/etcd directory contains highly-avaliable distributed key/value data storage
-across an OpenShift cluster. Allowing access to users to this directory could compromise OpenShift
-data and the cluster.
+ CCE-83890-4
-
+
-
+
-
- Verify Permissions on the OpenShift Container Network Interface Files
-
-To properly set the permissions of /etc/cni/net.d/*, run the command:
-$ sudo chmod 0644 /etc/cni/net.d/*
+
+ Verify Group Who Owns The OpenShift SDN Container Network Interface Plugin IP Address Allocations
+ To properly set the group owner of /var/lib/cni/networks/openshift-sdn/.*, run the command: $ sudo chgrp root /var/lib/cni/networks/openshift-sdn/.*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -12859,26 +14878,24 @@ To properly set the permissions of /etc/cni/net.d/*, run
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
+ 1.1.10
CNI (Container Network Interface) files consist of a specification and libraries for
writing plugins to configure network interfaces in Linux containers, along with a number
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
-
- CCE-83379-8
+
+ CCE-84211-2
-
+
-
+
-
- Verify Permissions on the OpenShift Controller Manager Kubeconfig File
-
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
+
+ Verify Group Who Owns The Kubernetes API Server Pod Specification File
+ To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes API Server service.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -12889,26 +14906,23 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.17
- The Controller Manager's kubeconfig contains information about how the
-component will access the API server. You should restrict its file
-permissions to maintain the integrity of the file. The file should be
-writable by only the administrators on the system.
+ 1.1.2
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes API Server that is configured on the system. Protection of this file is
+critical for OpenShift security.
- CCE-83604-9
+ CCE-83530-6
-
+
-
+
-
- Verify Permissions on the Etcd Database Directory
-
-To properly set the permissions of /var/lib/etcd, run the command:
-$ sudo chmod 0700 /var/lib/etcd
- This rule is only applicable for nodes that run the Etcd service.
+
+ Verify Group Who Owns The Kubernetes Controller Manager Pod Specification File
+ To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -12919,26 +14933,23 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.11
- etcd is a highly-available key-value store used by Kubernetes deployments for persistent
-storage of all of its REST API objects. This data directory should be protected from any
-unauthorized reads or writes. It should not be readable or writable by any group members
-or the world.
+ 1.1.4
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes Controller Manager Server that is configured on the system. Protection of this file is
+critical for OpenShift security.
- CCE-84013-2
+ CCE-83953-0
-
+
-
+
-
- Verify Permissions on the Etcd Write-Ahead-Log Files
-
-To properly set the permissions of /var/lib/etcd/member/wal/*, run the command:
-$ sudo chmod 0600 /var/lib/etcd/member/wal/*
- This rule is only applicable for nodes that run the Etcd service.
+
+ Verify Group Who Owns The Kubernetes Scheduler Pod Specification File
+ To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command: $ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Scheduler service.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -12949,26 +14960,36 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.11
- etcd is a highly-available key-value store used by Kubernetes deployments for persistent
-storage of all of its REST API objects. This data directory should be protected from any
-unauthorized reads or writes. It should not be readable or writable by any group members
-or the world.
+ 1.1.6
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes scheduler that is configured on the system. Protection of this file is
+critical for OpenShift security.
- CCE-83382-2
+ CCE-83614-8
-
+
-
+
-
- Verify Permissions on the Etcd Member Pod Specification File
+
+ Verify Group Who Owns The OpenShift Admin Kubeconfig File
+ To properly set the group owner of /etc/kubernetes/kubeconfig, run the command: $ sudo chgrp root /etc/kubernetes/kubeconfig
+ 1.1.14
+ The /etc/kubernetes/kubeconfig file contains information about the administrative configuration of the
+OpenShift cluster that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+
+
+
+
+ Verify Group Who Owns The OpenShift Admin Kubeconfig Files
-To properly set the permissions of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
- This rule is only applicable for nodes that run the Etcd service.
+To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes API server service.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -12979,30 +15000,26 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.7
- The etcd pod specification file controls various parameters that
-set the behavior of the etcd service in the master node. etcd is a
-highly-available key-value store which Kubernetes uses for persistent
-storage of all of its REST API object. You should restrict its file
-permissions to maintain the integrity of the file. The file should be
-writable by only the administrators on the system.
+ 1.1.14
+ There are various kubeconfig files that can be used by the administrator,
+defining various settings for the administration of the cluster. These files
+contain credentials that can be used to control the cluster and are needed
+for disaster recovery and each kubeconfig points to a different endpoint in
+the cluster. You should restrict its file permissions to maintain the
+integrity of the kubeconfig file as an attacker who gains access to these
+files can take over the cluster.
- CCE-83973-8
+ CCE-84204-7
-
+
-
+
-
- Verify Permissions on the Etcd PKI Certificate Files
-
-To properly set the permissions of /etc/kubernetes/static-pod-resources/etcd-*/secrets/*/*.crt, run the command:
-$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/etcd-*/secrets/*/*.crt
- This rule is only applicable for nodes that run the Etcd service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Verify Group Who Owns The OpenShift Multus Container Network Interface Plugin Files
+ To properly set the group owner of /var/run/multus/cni/net.d/*, run the command: $ sudo chgrp root /var/run/multus/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13011,24 +15028,28 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.20
- OpenShift makes use of a number of certificate files as part of the operation
-of its components. The permissions on these files should be set to
-600 or more restrictive to protect their integrity.
-
- CCE-83362-4
+ 1.1.10
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83818-5
-
+
-
+
-
- Verify Permissions on the OpenShift SDN Container Network Interface Plugin IP Address Allocations
+
+ Verify Group Who Owns The OpenShift PKI Certificate Files
-To properly set the permissions of /var/lib/cni/networks/openshift-sdn/*, run the command:
-$ sudo chmod 0644 /var/lib/cni/networks/openshift-sdn/*
+To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/tls.crt, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/tls.crt
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13037,26 +15058,26 @@ To properly set the permissions of /var/lib/cni/networks/openshift-sd
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
- CNI (Container Network Interface) files consist of a specification and libraries for
-writing plugins to configure network interfaces in Linux containers, along with a number
-of supported plugins. Allowing writeable access to the files could allow an attacker to modify
-the networking configuration potentially adding a rogue network connection.
-
- CCE-83469-7
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+The directory and files should be owned by the system administrator.
+
+ CCE-83922-5
-
+
-
+
-
- Verify Permissions on the Kubernetes API Server Pod Specification File
+
+ Verify Group Who Owns The OpenShift PKI Private Key Files
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes API Server service.
+To properly set the group owner of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/*/*/*/*.key
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
The aforementioned service is only running on the nodes labeled
"master" by default.
CIP-003-8 R6
@@ -13067,28 +15088,25 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.1
- If the Kubernetes specification file is writable by a group-owner or the
-world the risk of its compromise is increased. The file contains the configuration of
-the Kubernetes API server that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+The directory and files should be owned by root:root.
- CCE-83983-7
+ CCE-84172-6
-
+
-
+
-
- Verify Permissions on the Kubernetes Controller Manager Pod Specificiation File
+
+ Verify Group Who Owns The OpenShift SDN CNI Server Config
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+To properly set the group owner of /var/run/openshift-sdn/cniserver/config.json, run the command:
+$ sudo chgrp root /var/run/openshift-sdn/cniserver/config.json
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13097,56 +15115,40 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.3
- If the Kubernetes specification file is writable by a group-owner or the
-world the risk of its compromise is increased. The file contains the configuration of
-an Kubernetes Controller Manager server that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-84161-9
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83605-6
-
+
-
+
-
- Verify Permissions on the Kube Scheduler Pod Specification File
-
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml
- 1.1.5
- If the Kube specification file is writable by a group-owner or the
-world, the risk of its compromise is increased. The file contains the configuration of
-an OpenShift scheduler that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-
+
+ Verify Group Who Owns The OpenShift Open vSwitch Files
+ To properly set the group owner of /etc/openvswitch/.*, run the command: $ sudo chgrp root /etc/openvswitch/.*
+ 1.1.10
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+
-
-
- Verify Permissions on the OpenShift Admin Kubeconfig File
-
-To properly set the permissions of /etc/kubernetes/kubeconfig, run the command:
-$ sudo chmod 0600 /etc/kubernetes/kubeconfig
- 1.1.13
- If the /etc/kubernetes/kubeconfig file is writable by a group-owner or the
-world, the risk of its compromise is increased. The file contains the administration configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
+
-
- Verify Permissions on the OpenShift Admin Kubeconfig Files
-
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
-$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Verify Group Who Owns The Open vSwitch Configuration Database
+ Check if the group owner of /etc/openvswitch/conf.db is
+hugetlbfs on architectures other than s390x or openvswitch
+on s390x.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13155,28 +15157,25 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.13
- There are various kubeconfig files that can be used by the administrator,
-defining various settings for the administration of the cluster. These files
-contain credentials that can be used to control the cluster and are needed
-for disaster recovery; each kubeconfig points to a different endpoint in
-the cluster. You should restrict its file permissions to maintain the
-integrity of the kubeconfig file as an attacker who gains access to these
-files can take over the cluster.
-
- CCE-84278-1
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-88281-1
-
+
-
+
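Because the expected group owner differs by architecture, a check for the rule above could look like this sketch (uname and stat assumed available on the node):
$ expected=hugetlbfs; [ "$(uname -m)" = s390x ] && expected=openvswitch
$ [ "$(sudo stat -c '%G' /etc/openvswitch/conf.db)" = "$expected" ] && echo PASS || echo FAIL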
-
- Verify Permissions on the OpenShift Multus Container Network Interface Plugin Files
-
-To properly set the permissions of /var/run/multus/cni/net.d/*, run the command:
-$ sudo chmod 0644 /var/run/multus/cni/net.d/*
+
+ Verify Group Who Owns The Open vSwitch Configuration Database Lock
+ Check if the group owner of /etc/openvswitch/conf.db.~lock~ is
+hugetlbfs on architectures other than s390x or openvswitch
+on s390x.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13191,22 +15190,19 @@ writing plugins to configure network interfaces in Linux containers, along with
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
- CCE-83467-1
+ CCE-90793-1
-
+
-
+
-
- Verify Permissions on the OpenShift PKI Certificate Files
+
+ Verify Group Who Owns The Open vSwitch Configuration Database Lock
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-*/secrets/*/tls.crt, run the command:
-$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/kube-*/secrets/*/tls.crt
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+To properly set the group owner of /etc/openvswitch/.conf.db.~lock~, run the command:
+$ sudo chgrp hugetlbfs /etc/openvswitch/.conf.db.~lock~
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13215,83 +15211,79 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.20
- OpenShift makes use of a number of certificate files as part of the operation
-of its components. The permissions on these files should be set to
-600 or more restrictive to protect their integrity.
-
- CCE-83552-0
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-84219-5
-
+
-
+
-
- Verify Permissions on the OpenShift PKI Private Key Files
+
+ Verify Group Who Owns The Open vSwitch Configuration Database Lock
-To properly set the permissions of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
-$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/*/*/*/*.key
- This rule is only applicable for nodes that run the Kubernetes Control Plane.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
- CIP-003-8 R1.3
- CIP-003-8 R3
- CIP-003-8 R3.1
- CIP-003-8 R3.2
- CIP-003-8 R3.3
- CIP-003-8 R5.1.1
- CIP-003-8 R5.3
- CIP-004-6 R2.2.3
- CIP-004-6 R2.3
- CIP-007-3 R5.1
- CIP-007-3 R5.1.2
- CIP-007-3 R5.2
- CIP-007-3 R5.3.1
- CIP-007-3 R5.3.2
- CIP-007-3 R5.3.3
+To properly set the group owner of /etc/openvswitch/.conf.db.~lock~, run the command:
+$ sudo chgrp hugetlbfs /etc/openvswitch/.conf.db.~lock~
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
CM-6
CM-6(1)
- IA-5(2)
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.21
- OpenShift makes use of a number of key files as part of the operation of its
-components. The permissions on these files should be set to 600
-to protect their integrity and confidentiality.
-
- CCE-83580-1
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-85936-3
-
+
-
+
-
- Verify Permissions on the OpenShift Open vSwitch Files
+
+ Verify Group Who Owns The Open vSwitch Configuration Database
-To properly set the permissions of /etc/openvswitch/.*, run the command:
-$ sudo chmod 0644 /etc/openvswitch/.*
- 1.4.9
+To properly set the group owner of /etc/openvswitch/conf.db, run the command:
+$ sudo chgrp hugetlbfs /etc/openvswitch/conf.db
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.9
CNI (Container Network Interface) files consist of a specification and libraries for
writing plugins to configure network interfaces in Linux containers, along with a number
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
+
+ CCE-84226-0
-
+
-
+
-
- Verify Permissions on the Open vSwitch Configuration Database
+
+ Verify Group Who Owns The Open vSwitch Configuration Database
-To properly set the permissions of /etc/openvswitch/conf.db, run the command:
-$ sudo chmod 0640 /etc/openvswitch/conf.db
+To properly set the group owner of /etc/openvswitch/conf.db, run the command:
+$ sudo chgrp openvswitch /etc/openvswitch/conf.db
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13305,20 +15297,20 @@ To properly set the permissions of /etc/openvswitch/conf.db
-
- CCE-83788-0
+
+ CCE-85927-2
-
+
-
+
-
- Verify Permissions on the Open vSwitch Configuration Database Lock
-
-To properly set the permissions of /etc/openvswitch/.conf.db.~lock~, run the command:
-$ sudo chmod 0600 /etc/openvswitch/.conf.db.~lock~
+
+ Verify Group Who Owns The Open vSwitch Process ID File
+ Ensure that the file /var/run/openvswitch/ovs-vswitchd.pid
+is owned by the group openvswitch or hugetlbfs,
+depending on your settings and Open vSwitch version.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13333,19 +15325,19 @@ writing plugins to configure network interfaces in Linux containers, along with
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
- CCE-84202-1
+ CCE-83630-4
-
+
-
+
-
- Verify Permissions on the Open vSwitch Process ID File
-
-To properly set the permissions of /var/run/openvswitch/ovs-vswitchd.pid, run the command:
-$ sudo chmod 0644 /var/run/openvswitch/ovs-vswitchd.pid
+
+ Verify Group Who Owns The Open vSwitch Persistent System ID
+ Check if the group owner of /etc/openvswitch/system-id.conf is
+hugetlbfs on architectures other than s390x or openvswitch
+on s390x.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13360,19 +15352,19 @@ writing plugins to configure network interfaces in Linux containers, along with
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
- CCE-83666-8
+ CCE-85892-8
-
+
-
+
-
- Verify Permissions on the Open vSwitch Persistent System ID
+
+ Verify Group Who Owns The Open vSwitch Persistent System ID
-To properly set the permissions of /etc/openvswitch/system-id.conf, run the command:
-$ sudo chmod 0644 /etc/openvswitch/system-id.conf
+To properly set the group owner of /etc/openvswitch/system-id.conf, run the command:
+$ sudo chgrp hugetlbfs /etc/openvswitch/system-id.conf
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13386,20 +15378,20 @@ To properly set the permissions of /etc/openvswitch/system-id.conf
-
- CCE-83400-2
+
+ CCE-83677-5
-
+
-
+
-
- Verify Permissions on the Open vSwitch Daemon PID File
+
+ Verify Group Who Owns The Open vSwitch Persistent System ID
-To properly set the permissions of /run/openvswitch/ovs-vswitchd.pid, run the command:
-$ sudo chmod 0644 /run/openvswitch/ovs-vswitchd.pid
+To properly set the group owner of /etc/openvswitch/system-id.conf, run the command:
+$ sudo chgrp hugetlbfs /etc/openvswitch/system-id.conf
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13413,20 +15405,20 @@ To properly set the permissions of /run/openvswitch/ovs-vswitchd.pid<
writing plugins to configure network interfaces in Linux containers, along with a number
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
-
- CCE-83710-4
+
+ CCE-85928-0
-
+
-
+
-
- Verify Permissions on the Open vSwitch Database Server PID
-
-To properly set the permissions of /run/openvswitch/ovsdb-server.pid, run the command:
-$ sudo chmod 0644 /run/openvswitch/ovsdb-server.pid
+
+ Verify Group Who Owns The Open vSwitch Daemon PID File
+ Ensure that the file /run/openvswitch/ovs-vswitchd.pid
+is owned by the group openvswitch or hugetlbfs,
+depending on your settings and Open vSwitch version.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13441,22 +15433,19 @@ writing plugins to configure network interfaces in Linux containers, along with
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
- CCE-83679-1
+ CCE-84129-6
-
+
-
+
-
- Verify Permissions on the Kubernetes Scheduler Pod Specification File
-
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
- This rule is only applicable for nodes that run the Kubernetes Scheduler service.
-The aforementioned service is only running on the nodes labeled
-"master" by default.
+
+ Verify Group Who Owns The Open vSwitch Database Server PID
+ Ensure that the file /run/openvswitch/ovsdb-server.pid
+is owned by the group openvswitch or hugetlbfs,
+depending on your settings and Open vSwitch version.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13465,25 +15454,25 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.5
- If the Kubernetes specification file is writable by a group-owner or the
-world, the risk of its compromise is increased. The file contains the configuration of
-the Kubernetes Scheduler service that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-84057-9
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-84166-8
-
+
-
+
-
- Verify Permissions on the Kubernetes Scheduler Kubeconfig File
+
+ Verify Group Who Owns The Kubernetes Scheduler Kubeconfig File
-To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig, run the command:
-$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig
+To properly set the group owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig, run the command:
+$ sudo chgrp root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig
This rule is only applicable for nodes that run the Kubernetes Scheduler service.
The aforementioned service is only running on the nodes labeled
"master" by default.
@@ -13495,41 +15484,22 @@ The aforementioned service is only running on the nodes labeled
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.15
+ 1.1.16
The kubeconfig for the Scheduler contains parameters for the scheduler
-to access the Kube API. You should restrict its file permissions to maintain
-the integrity of the file. The file should be writable by only the
-administrators on the system.
+to access the Kube API.
+You should set its file ownership to maintain the integrity of the file.
- CCE-83772-4
-
-
-
-
-
-
-
-
- The OpenShift etcd Data Directory Must Have Mode 0700
-
-To properly set the permissions of /var/lib/etcd, run the command:
-$ sudo chmod 0700 /var/lib/etcd
- 1.1.11
- The /var/lib/etcd directory contains highly-available distributed key/value data storage
-across an OpenShift cluster. Allowing users access to this directory could compromise OpenShift
-data and the cluster.
+ CCE-83471-3
-
+
-
+
-
- Verify Permissions on the OpenShift SDN CNI Server Config
-
-To properly set the permissions of /var/run/openshift-sdn/cniserver/config.json, run the command:
-$ sudo chmod 0444 /var/run/openshift-sdn/cniserver/config.json
+
+ Verify User Who Owns The OpenShift Container Network Interface Files
+ To properly set the owner of /etc/cni/net.d/*, run the command: $ sudo chown root /etc/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -13538,1115 +15508,831 @@ To properly set the permissions of /var/run/openshift-sdn/cniserver/c
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.1.9
+ 1.1.10
CNI (Container Network Interface) files consist of a specification and libraries for
writing plugins to configure network interfaces in Linux containers, along with a number
of supported plugins. Allowing writeable access to the files could allow an attacker to modify
the networking configuration potentially adding a rogue network connection.
-
- CCE-83927-4
+
+ CCE-83460-6
-
+
-
+
-
-
- Kubernetes - Network Configuration and Firewalls
- Most systems must be connected to a network of some
-sort, and this brings with it the substantial risk of network
-attack. This section discusses the security impact of decisions
-about networking which must be made when configuring a system.
-
-This section also discusses firewalls, network access
-controls, and other network security frameworks, which allow
-system-level rules to be written that can limit an attacker's ability
-to connect to your system. These rules can specify that network
-traffic should be allowed or denied from certain IP addresses,
-hosts, and networks. The rules can also specify which of the
-system's network services are available to particular hosts or
-networks.
-
- Ensure that cluster-wide proxy is set
+
+ Verify User Who Owns The OpenShift Controller Manager Kubeconfig File
-
-Production environments can deny direct access to the Internet and instead have
-an HTTP or HTTPS proxy available.
-
-
-The Proxy object is used to manage the cluster-wide egress proxy. Setting this
-will ensure that containers get the appropriate environment variables set
-so that traffic goes to the proxy per organizational requirements.
-
-
-For more information, see the relevant documentation.
-
-
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/proxies/cluster API endpoint and persist it to the local /apis/config.openshift.io/v1/proxies/cluster file.
- CIP-004-6 R2.2.4
+To properly set the owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R6
CIP-004-6 R3
- CIP-007-3 R5.1
CIP-007-3 R6.1
- SC-7(8)
- External networks tend to be outside of organizational control. By ensuring
-that egress traffic goes through an authorized proxy, one is able to ensure
-that expected and safe traffic is coming out, and malicious actors
-aren't leaking sensitive information, or calling back from a central command
-center to get further instructions upon intrusion.
- CCE-90765-9
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.18
+ The Controller Manager's kubeconfig contains information about how the
+component will access the API server. You should set its file ownership to
+maintain the integrity of the file.
+
+ CCE-83904-3
-
-
+
+
+
+
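One way to inspect the cluster-wide Proxy object described above is a jsonpath query (a sketch; the values returned are environment-specific):
$ oc get proxy/cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}{.spec.noProxy}{"\n"}'
Empty output indicates that no egress proxy has been configured.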
-
- Ensure that the CNI in use supports Network Policies
- There are a variety of CNI plugins available for Kubernetes. If the CNI in
-use does not support Network Policies, it may not be possible to effectively
-restrict traffic in the cluster. OpenShift supports Kubernetes NetworkPolicy
-using a Kubernetes Container Network Interface (CNI) plug-in.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/operator.openshift.io/v1/networks/cluster
- API endpoint, filter with the jq utility using the following filter
- [.spec.defaultNetwork.type]
- and persist it to the local
- /apis/operator.openshift.io/v1/networks/cluster#35e33d6dc1252a03495b35bd1751cac70041a511fa4d282c300a8b83b83e3498
- file.
-
+
+ Verify User Who Owns The Etcd Database Directory
+
+To properly set the owner of /var/lib/etcd/member/, run the command:
+$ sudo chown root /var/lib/etcd/member/
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-1.1.4
- Req-1.2
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- SRG-APP-000038-CTR-000105
- SRG-APP-000039-CTR-000110
- 5.3.1
- Kubernetes network policies are enforced by the CNI plugin in use. As such
-it is important to ensure that the CNI plugin supports both Ingress and
-Egress network policies.
+ 1.1.12
+ etcd is a highly-available key-value store used by Kubernetes deployments for
+persistent storage of all of its REST API objects. This data directory should
+be protected from any unauthorized reads or writes.
+
+ CCE-83905-0
-
-
+
-
+
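A sketch of the equivalent manual query with oc and jq, mirroring the filter quoted in the check description above:
$ oc get networks.operator.openshift.io cluster -o json | jq '[.spec.defaultNetwork.type]'
Both OpenShiftSDN and OVNKubernetes support NetworkPolicy.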
-
- Ensure that application Namespaces have Network Policies defined.
- Use network policies to isolate traffic in your cluster network.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/networking.k8s.io/v1/networkpolicies
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default") | .metadata.namespace] | unique
- and persist it to the local
- /apis/networking.k8s.io/v1/networkpolicies#51742b3e87275db9eb7fc6c0286a9e536178a2a83e3670b615ceaf545e7fd300
- file.
- /api/v1/namespaces
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default")]
- and persist it to the local
- /api/v1/namespaces#34d4beecc95c65d815d9d48fd4fdcb0c521631852ad088ef74e36d012b0e1e0d
- file.
-
- CIP-003-8 R4
- CIP-003-8 R4.2
- CIP-003-8 R5
+
+ Verify User Who Owns The Etcd Write-Ahead-Log Files
+
+To properly set the owner of /var/lib/etcd/member/wal/*, run the command:
+$ sudo chown root /var/lib/etcd/member/wal/*
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
- CIP-004-6 R2.2.4
CIP-004-6 R3
- CIP-007-3 R2
- CIP-007-3 R2.1
- CIP-007-3 R2.2
- CIP-007-3 R2.3
- CIP-007-3 R5.1
CIP-007-3 R6.1
- AC-4
- AC-4(21)
- CA-3(5)
CM-6
CM-6(1)
- CM-7
- CM-7(1)
- SC-7
- SC-7(3)
- SC-7(5)
- SC-7(8)
- SC-7(12)
- SC-7(13)
- SC-7(18)
- SC-7(10)
- SI-4(22)
- Req-1.1.4
- Req-1.2
- Req-1.2.1
- Req-1.3.1
- Req-1.3.2
- Req-2.2
- SRG-APP-000038-CTR-000105
- SRG-APP-000039-CTR-000110
- SRG-APP-000141-CTR-000315
- SRG-APP-000141-CTR-000320
- SRG-APP-000142-CTR-000325
- SRG-APP-000142-CTR-000330
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- SRG-APP-000645-CTR-001410
- 5.3.2
- Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application. Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. When a network policy is introduced to a given namespace, all traffic not allowed
-by the policy is denied. However, if there are no network policies in a namespace, all
-traffic will be allowed into and out of the pods in that namespace.
+ 1.1.12
+ etcd is a highly-available key-value store used by Kubernetes deployments for
+persistent storage of all of its REST API objects. This data directory should
+be protected from any unauthorized reads or writes.
+
+ CCE-84010-8
-
-
+
-
+
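The two API queries described above can be approximated manually; a sketch, assuming jq is installed:
$ oc get netpol -A -o json | jq '[.items[].metadata.namespace] | unique'
$ oc get ns -o json | jq '[.items[].metadata.name | select(startswith("openshift") or startswith("kube-") or . == "default" | not)]'
Any namespace in the second list that is missing from the first has no NetworkPolicy defined.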
-
- Ensure that the default Ingress CA (wildcard issuer) has been replaced
- Check that the default Ingress CA has been replaced.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/proxies/cluster API endpoint and persist it to the local /apis/config.openshift.io/v1/proxies/cluster file.
- CIP-007-3 R5.1
- SC-17
- OpenShift auto-generates several PKIs to serve TLS on different
-endpoints of the system. It is possible and necessary to configure a
-custom PKI which allows external clients to trust the endpoints.
-
-The Ingress Operator is the component responsible for enabling external
-access to OpenShift Container Platform cluster services. The aforementioned
-operator creates an internal CA and issues a wildcard certificate that is
-valid for applications under the .apps sub-domain. Both the web console
-and CLI use this certificate as well. The certificate and key would need
-to be replaced since a certificate coming from a trusted provider is
-needed.
-
-
- https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html
+
+ Verify User Who Owns The Etcd Member Pod Specification File
+ To properly set the owner of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.8
+ The etcd pod specification file controls various parameters that
+set the behavior of the etcd service in the master node. etcd is a
+highly-available key-value store which Kubernetes uses for persistent
+storage of all of its REST API objects. You should restrict its file
+permissions to maintain the integrity of the file. The file should be
+writable by only the administrators on the system.
+
+ CCE-83988-6
-
-
+
-
+
-
- Ensure that the default Ingress certificate has been replaced
- Check that the default Ingress certificate has been replaced.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default API endpoint and persist it to the local /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default file.
- SC-12
- OpenShift auto-generates several PKIs to serve TLS on different
-endpoints of the system. It is possible and necessary to configure a
-custom PKI which allows external clients to trust the endpoints.
-
-The Ingress Operator is the component responsible for enabling external
-access to OpenShift Container Platform cluster services. The aforementioned
-operator creates an internal CA and issues a wildcard certificate that is
-valid for applications under the .apps sub-domain. Both the web console
-and CLI use this certificate as well. The certificate and key would need
-to be replaced since a certificate coming from a trusted provider is
-needed.
-
-
- https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html
+
+ Verify User Who Owns The Etcd PKI Certificate Files
+
+To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/*.crt, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/*.crt
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+The directory and files should be owned by the system administrator.
+
+ CCE-83898-7
-
-
+
-
+
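As a quick manual probe for the two Ingress rules above, one can check whether a custom default certificate is referenced (a sketch; an empty result means the operator-generated wildcard certificate is still in use):
$ oc get ingresscontroller/default -n openshift-ingress-operator -o jsonpath='{.spec.defaultCertificate.name}'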
-
- Ensure that all Routes have IP whitelist enabled
- OpenShift has an option to set the IP whitelist for Routes [1] when
-creating new Routes. All routes outside the openshift namespaces and
-the kube namespaces should use the IP whitelist annotations. Requests
-from IP addresses that are not in the whitelist are dropped.
-
-[1] https://docs.openshift.com/container-platform/latest/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/route.openshift.io/v1/routes?limit=500
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/ip_whitelist"] | not) | .metadata.name]
- and persist it to the local
- /apis/route.openshift.io/v1/routes?limit=500#aec152a4446d7917fcbebee892a2ec3fbdef3b71cc0784c9457b2e54fd64dd3b
- file.
-
- SC-7(5)
- The usage of IP whitelist for Routes provides basic protection against unwanted access.
- CCE-90596-8
+
+ Verify User Who Owns The OpenShift SDN Container Network Interface Plugin IP Address Allocations
+ To properly set the owner of /var/lib/cni/networks/openshift-sdn/.*, run the command: $ sudo chown root /var/lib/cni/networks/openshift-sdn/.*
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.10
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-84248-4
-
-
+
-
+
-
- Ensure that all OpenShift Routes prefer TLS
- OpenShift Container Platform provides methods for communicating from
-outside the cluster with services running in the cluster. TLS must
-be used to protect these communications. OpenShift
-Routes provide the ability to configure the needed TLS settings. With
-these, one is able to configure that any request coming from the outside
-must use TLS. To verify this, ensure that every Route in the system
-has a policy of Disable or Redirect to ensure a
-secure endpoint is used. The aforementioned policy will be set in
-a Routes .spec.tls.insecureEdgeTerminationPolicy setting.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/route.openshift.io/v1/routes?limit=500 API endpoint and persist it to the local /apis/route.openshift.io/v1/routes?limit=500 file.
- CIP-003-8 R4
- CIP-003-8 R4.2
- CIP-003-8 R5
+
+ Verify User Who Owns The Kubernetes API Server Pod Specification File
+ To properly set the owner of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes API Server service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R6
CIP-004-6 R3
- CIP-007-3 R5.1
- CIP-007-3 R7.1
- AC-4
- AC-4(21)
- AC-17(3)
- SC-8
- SC-8(1)
- SC-8(2)
- SI-4
- Req-6.5.4
- SRG-APP-000038-CTR-000105
- SRG-APP-000039-CTR-000110
- SRG-APP-000441-CTR-001090
- SRG-APP-000442-CTR-001095
- Using clear-text in communications coming to or from outside
-the cluster's network may leak sensitive information.
- CCE-84225-2
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.2
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes API Server that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+ CCE-83372-3
-
-
+
-
+
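A sketch of how one might list Routes that set neither Disable nor Redirect, using the field named in the description above (jq assumed):
$ oc get routes -A -o json | jq -r '.items[] | select(.spec.tls.insecureEdgeTerminationPolicy != "Redirect" and .spec.tls.insecureEdgeTerminationPolicy != "Disable") | "\(.metadata.namespace)/\(.metadata.name)"'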
-
- Ensure that all Routes have rate limit enabled
- OpenShift has an option to set the rate limit for Routes [1] when creating new Routes.
-All routes outside the openshift namespaces and the kube namespaces should use the
-rate-limiting annotations.
-
-[1] https://docs.openshift.com/container-platform/4.9/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/route.openshift.io/v1/routes?limit=500
- API endpoint, filter with the jq utility using the following filter
- [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/rate-limit-connections"] == "true" | not) | .metadata.name]
- and persist it to the local
- /apis/route.openshift.io/v1/routes?limit=500#842fa6716f17342d62e70f2755db709b9d7a161cf0338ea8bfae9b06dab5e6cc
- file.
-
- SC-5
- SC-5(1)
- SC-5(2)
- SRG-APP-000246-CTR-000605
- SRG-APP-000435-CTR-001070
- The usage of rate limit for Routes provides basic protection against distributed denial-of-service (DDoS) attacks.
- CCE-90779-0
+
+ Verify User Who Owns The Kubernetes Controller Manager Pod Specification File
+ To properly set the owner of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R6
+ CIP-004-6 R3
+ CIP-007-3 R6.1
+ CM-6
+ CM-6(1)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.4
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes Controller Manager Server that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+ CCE-83795-5
-
-
+
-
+
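For illustration, the rate-limit annotation named in the filter above could be applied to a single Route as follows (the namespace and route name are hypothetical):
$ oc -n myapp annotate route myroute haproxy.router.openshift.io/rate-limit-connections=true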
-
-
- OpenShift API Server
- This section contains recommendations for openshift-apiserver configuration.
-
- Configure the Audit Log Path
- To enable auditing on the OpenShift API Server, the audit log path must be set.
-Edit the openshift-apiserver configmap
-and set the audit-log-path to a suitable path and file
-where audit logs should be written. For example:
-
-"apiServerArguments":{
- ...
- "audit-log-path":"/var/log/openshift-apiserver/audit.log",
- ...
-
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift-apiserver/configmaps/config API endpoint and persist it to the local /api/v1/namespaces/openshift-apiserver/configmaps/config file.
+
+ Verify User Who Owns The Kubernetes Scheduler Pod Specification File
+ To properly set the owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command: $ sudo chown root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Scheduler service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.2.22
- Auditing of the API Server is not enabled by default. Auditing the API Server
-provides a security-relevant chronological set of records documenting the sequence
-of activities that have affected the system by users, administrators, or other
-system components.
- CCE-83547-0
+ 1.1.6
+ The Kubernetes specification file contains information about the configuration of the
+Kubernetes scheduler that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+ CCE-83393-9
-
-
+
-
+
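A sketch for reading the configured audit log path back out of the configmap referenced above, assuming the configuration is stored as JSON under the config.yaml key:
$ oc get configmap config -n openshift-apiserver -o json | jq -r '.data["config.yaml"] | fromjson | .apiServerArguments["audit-log-path"]'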
-
-
- Role-based Access Control
- Role-based access control (RBAC) objects determine
-whether a user is allowed to perform a given action
-within a project.
-
-Cluster administrators can use the cluster roles and
-bindings to control who has various access levels to
-the OpenShift Container Platform itself
-and all projects.
-
-Developers can use local roles and bindings to control
-who has access to their projects. Note that authorization
-is a separate step from authentication, which is more
-about determining the identity of who is taking the action.
-
- Ensure cluster roles are defined in the cluster
-
-
-RBAC is a critical feature in terms of security for Kubernetes and
-OpenShift. It enables administrators to segment the privileges
-granted to a service account, and thus limit the resources
-that account can access. By defining cluster roles appropriately
-one is able to codify organizational policy. [1]
-
-
-[1]
- https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html
-
-
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000 API endpoint and persist it to the local /apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000 file.
- Req-7.1.1
- By defining RBAC cluster roles, one is able to limit the permissions
-given to a Service Account, and thus limit the blast radius
-that an account compromise would have.
- CCE-86595-6
-
-
-
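A minimal example of codifying such a policy as a cluster role (the name and rule set are purely illustrative):
$ oc apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF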
+
+ Verify User Who Owns The OpenShift Admin Kubeconfig File
+ To properly set the owner of /etc/kubernetes/kubeconfig, run the command: $ sudo chown root /etc/kubernetes/kubeconfig
+ 1.1.14
+ The /etc/kubernetes/kubeconfig file contains information about the administrative configuration of the
+OpenShift cluster that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+
-
- Profiling is protected by RBAC
- Ensure that the cluster-debugger cluster role includes the /debug/pprof
-resource URL. This demonstrates that profiling is protected by RBAC, with a
-specific cluster role to allow access.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger API endpoint and persist it to the local /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger file.
+
+ Verify User Who Owns The OpenShift Admin Kubeconfig Files
+
+To properly set the owner of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.3.2
- 1.4.1
- Profiling allows for the identification of specific performance bottlenecks.
-It generates a significant amount of program data that could potentially be
-exploited to uncover system and program details. If you are not experiencing
-any bottlenecks and do not need the profiler for troubleshooting purposes, it
-is recommended to turn it off to reduce the potential attack surface. To
-ensure the collected data is not exploited, profiling endpoints are secured
-via RBAC (see cluster-debugger role). By default, the profiling endpoints are
-accessible only by users bound to cluster-admin or cluster-debugger role.
-Profiling cannot be disabled.
- CCE-84182-5
+ 1.1.14
+ There are various kubeconfig files that can be used by the administrator,
+defining various settings for the administration of the cluster. These files
+contain credentials that can be used to control the cluster and are needed
+for disaster recovery; each kubeconfig points to a different endpoint in
+the cluster. You should restrict its file permissions to maintain the
+integrity of the kubeconfig file as an attacker who gains access to these
+files can take over the cluster.
+
+ CCE-83719-5
-
-
+
-
+
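To confirm the protection described above, a sketch that lists the non-resource URLs granted by the role (jq assumed); the output should include /debug/pprof:
$ oc get clusterrole cluster-debugger -o json | jq '[.rules[] | .nonResourceURLs // empty] | flatten'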
-
- Ensure that the cluster-admin role is only used where required
- The RBAC role cluster-admin provides wide-ranging powers over the
-environment and should be used only where and when needed.
+
+ Verify User Who Owns The OpenShift Multus Container Network Interface Plugin Files
+ To properly set the owner of /var/run/multus/cni/net.d/*, run the command: $ sudo chown root /var/run/multus/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
- Req-7.1.2
- Req-10.5.1
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.1.1
- Kubernetes provides a set of default roles where RBAC is used. Some of these
-roles such as cluster-admin provide wide-ranging privileges which should
-only be applied where absolutely necessary. Roles such as cluster-admin
-allow super-user access to perform any action on any resource. When used in
-a ClusterRoleBinding, it gives full control over every resource in the
-cluster and in all namespaces. When used in a RoleBinding, it gives full
-control over every resource in the rolebinding's namespace, including the
-namespace itself.
+ 1.1.10
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83603-1
+
+
+
-
+
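A sketch for auditing where cluster-admin is bound (jq assumed):
$ oc get clusterrolebindings -o json | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'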
-
- Limit Access to Kubernetes Secrets
- The Kubernetes API stores secrets, which may be service
-account tokens for the Kubernetes API or credentials used
-by workloads in the cluster. Access to these secrets should
-be restricted to the smallest possible group of users to
-reduce the risk of privilege escalation. To restrict users from
-secrets, remove get, list, and watch
-access to unauthorized users to secret objects in the cluster.
+
+ Verify User Who Owns The OpenShift PKI Certificate Files
+
+To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/tls.crt, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/tls.crt
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.1.2
- Inappropriate access to secrets stored within the Kubernetes
-cluster can allow for an attacker to gain additional access to
-the Kubernetes cluster or external resources whose credentials
-are stored as secrets.
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+
+ CCE-83558-7
+
+
+
-
+
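One way to review who currently holds such access in a given namespace (the namespace is hypothetical):
$ oc adm policy who-can get secrets -n myapp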
-
- Minimize Access to Pod Creation
- The ability to create pods in a namespace can provide a
-number of opportunities for privilege escalation. Where
-applicable, remove create access to pod
-objects in the cluster.
+
+ Verify User Who Owns The OpenShift PKI Private Key Files
+
+To properly set the owner of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/*/*/*/*.key
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.1.4
- The ability to create pods in a cluster opens up the cluster
-for privilege escalation.
+ 1.1.19
+ OpenShift makes use of a number of certificates as part of its operation.
+You should verify the ownership of the directory containing the PKI
+information and all files in that directory to maintain their integrity.
+The directory and files should be owned by root:root.
+
+ CCE-83435-8
+
+
+
-
+
-
- Ensure roles are defined in the cluster
+
+ Verify User Who Owns The OpenShift SDN CNI Server Config
-
-RBAC is a critical feature in terms of security for Kubernetes and
-OpenShift. It enables administrators to segment the privileges
-granted to a service account, and thus limit the resources
-that account can access. By defining roles appropriately
-one is able to codify organizational policy. [1]
-
-
-[1]
- https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html
-
-
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/roles?limit=1000 API endpoint and persist it to the local /apis/rbac.authorization.k8s.io/v1/roles?limit=1000 file.
- Req-7.1.1
- By defining RBAC roles, one is able to limit the permissions
-given to a Service Account, and thus limit the blast radius
-that an account compromise would have.
- CCE-86588-1
-
-
-
-
-
-
- Minimize Wildcard Usage in Cluster and Local Roles
- Kubernetes Cluster and Local Roles provide access to resources
-based on sets of objects and actions that can be taken on
-those objects. It is possible to set either of these using a
-wildcard * which matches all items. This violates the
-principle of least privilege and leaves a cluster more
-vulnerable to privilege abuse.
+To properly set the owner of /var/run/openshift-sdn/cniserver/config.json, run the command:
+$ sudo chown root /var/run/openshift-sdn/cniserver/config.json
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.1.3
- The principle of least privilege recommends that users are
-provided only the access required for their role and nothing
-more. The use of wildcard rights grants is likely to provide
-excessive rights to the Kubernetes API.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83932-4
+
+
+
-
+
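A sketch for spotting wildcard grants in cluster roles (jq assumed):
$ oc get clusterroles -o json | jq -r '.items[] | select([.rules[]?.verbs[]?, .rules[]?.resources[]?] | index("*")) | .metadata.name'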
-
-
- Kubernetes - Registry Security Practices
- Contains evaluations for Kubernetes registry security practices, and cluster-wide registry configuration.
-
- Allowed registries are configured
- The configuration registrySources.allowedRegistries determines the
-permitted registries that the OpenShift container runtime can access for builds
-and pods. This configuration setting ensures that all registries other than
-those specified are blocked.
-
-You can set the allowed repositories by applying the following manifest using
-oc patch, e.g. if you save the following snippet to
-/tmp/allowed-registries-patch.yaml
-
-spec:
- registrySources:
- allowedRegistries:
- - my-trusted-registry.internal.example.com
- you would call
-oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-registries-patch.yaml)" --type=merge
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint and persist it to the local /apis/config.openshift.io/v1/images/cluster file.
- CM-5(3)
- CM-7(2)
- CM-7(5)
- CM-11
- SRG-APP-000131-CTR-000280
- SRG-APP-000131-CTR-000285
- SRG-APP-000384-CTR-000915
- Allowed registries should be configured to restrict the registries that the
-OpenShift container runtime can access, and all other registries should be
-blocked.
+
+ Verify User Who Owns The OpenShift Open vSwitch Files
+ To properly set the owner of /etc/openvswitch/.*, run the command: $ sudo chown root /etc/openvswitch/.*
+ 1.1.10
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
-
-
+
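To read the current setting back after patching, a sketch using the fields named above:
$ oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources.allowedRegistries}'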
-
-
- Allowed registries for import are configured
- The configuration allowedRegistriesForImport limits the container
-image registries from which normal users may import images. This is important
-to control, as a user who can stand up a malicious registry can then import
-content which claims to include the SHAs of legitimate content layers.
-You can set the allowed repositories for import by applying the following
-manifest using oc patch, e.g. if you save the following snippet to
-/tmp/allowed-import-registries-patch.yaml
-
-spec:
- allowedRegistriesForImport:
- - domainName: my-trusted-registry.internal.example.com
- insecure: false
- you would call
-oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-import-registries-patch.yaml)" --type=merge
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint and persist it to the local /apis/config.openshift.io/v1/images/cluster file.
- CM-5(3)
- CM-7(2)
- CM-7(5)
- CM-11
- SRG-APP-000131-CTR-000280
- SRG-APP-000131-CTR-000285
- SRG-APP-000384-CTR-000915
- Allowed registries for import should be specified to limit the registries
-from which users may import images.
-
-
-
+
+
-
-
- OpenShift - Risk Assessment Settings
- Contains evaluations for the cluster's risk assessment configuration settings.
-
- Ensure that Compliance Operator is scanning the cluster
- The Compliance Operator
-scans the hosts and the platform (OCP)
-configurations for software flaws and improper configurations according
-to different compliance benchmarks. It uses OpenSCAP as a backend,
-which is a known and certified tool to do such scans.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/compliance.openshift.io/v1alpha1/compliancesuites?limit=5 API endpoint and persist it to the local /apis/compliance.openshift.io/v1alpha1/compliancesuites?limit=5 file.
- CIP-003-8 R1.3
- CIP-003-8 R4.3
+
+ Verify User Who Owns The Open vSwitch Configuration Database
+
+To properly set the owner of /etc/openvswitch/conf.db, run the command:
+$ sudo chown openvswitch /etc/openvswitch/conf.db
CIP-003-8 R6
- CIP-004-6 4.1
- CIP-004-6 4.2
CIP-004-6 R3
- CIP-004-6 R4
- CIP-004-6 R4.2
- CIP-005-6 R1
- CIP-005-6 R1.1
- CIP-005-6 R1.2
- CIP-007-3 R3
- CIP-007-3 R3.1
CIP-007-3 R6.1
- CIP-007-3 R8.4
CM-6
CM-6(1)
- RA-5
- RA-5(5)
- SA-4(8)
- Req-2.2.4
- SRG-APP-000414-CTR-001010
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- Vulnerability scanning and risk management are important detective controls
-for all systems, to detect potential flaws and unauthorized access.
- CCE-83697-3
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83489-5
-
-
+
-
+
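A quick manual probe for the check above (a sketch; the Compliance Operator installs into the openshift-compliance namespace by default):
$ oc get compliancesuites -n openshift-compliance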
-
-
- Security Context Constraints (SCC)
- Similar to the way that RBAC resources control user access,
-administrators can use Security Context Constraints (SCCs)
-to control permissions for pods. These permissions include
-actions that a pod, a collection of containers, can perform
-and what resources it can access. You can use SCCs to define
-a set of conditions that a pod must run with in order to be
-accepted into the system.
-
- Permitted SCCs with allowedCapabilities
- A regular expression that lists all SCCs that are permitted to set the allowedCapabilities attribute
- ^privileged$|^hostnetwork-v2$|^restricted-v2$|^nonroot-v2$
-
-
- Drop Container Capabilities
- Containers should not enable more capabilities than needed as this
-opens the door for malicious use. To disable the
-capabilities, the appropriate Security Context Constraints (SCCs)
-should set all capabilities as * or a list of capabilities in
-requiredDropCapabilities.
+
+ Verify User Who Owns The Open vSwitch Configuration Database Lock
+
+To properly set the owner of /etc/openvswitch/.conf.db.~lock~, run the command:
+$ sudo chown openvswitch /etc/openvswitch/.conf.db.~lock~
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.9
- By default, containers run with a default set of capabilities as assigned
-by the Container Runtime which can include dangerous or highly privileged
-capabilities. Capabilities should be dropped unless absolutely critical for
-the container to run software as added capabilities that are not required
-allow for malicious containers or attackers.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83462-2
+
+
+
-
+
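A sketch that lists each SCC with its dropped capabilities (jq assumed; note that SCC fields are top-level, not under spec):
$ oc get scc -o json | jq -r '.items[] | "\(.metadata.name): \(.requiredDropCapabilities)"'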
-
- Limit Container Capabilities
- Containers should not enable more capabilities than needed as this
-opens the door for malicious use. To enable only the
-required capabilities, the appropriate Security Context Constraints (SCCs)
-should set capabilities as a list in allowedCapabilities.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-/apis/security.openshift.io/v1/securitycontextconstraints
- API endpoint, filter with with the jq utility using the following filter
- [.items[] | select(.metadata.name | test("{{.var_sccs_with_allowed_capabilities_regex}}"; "") | not)] | map(.allowedCapabilities == null)
- and persist it to the local
- /apis/security.openshift.io/v1/securitycontextconstraints#395df9a25b06bd949effbff7e3071c03493e0dd679ee1c7bfcfcb35647e9328c
- file.
-
+
+ Verify User Who Owns The Open vSwitch Process ID File
+
+To properly set the owner of /var/run/openvswitch/ovs-vswitchd.pid, run the command:
+$ sudo chown openvswitch /var/run/openvswitch/ovs-vswitchd.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.8
- By default, containers run with a default set of capabilities as assigned
-by the Container Runtime which can include dangerous or highly privileged
-capabilities. Capabilities should be dropped unless absolutely critical for
-the container to run software as added capabilities that are not required
-allow for malicious containers or attackers.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writeable access to the files could allow an attacker to modify
+the networking configuration potentially adding a rogue network connection.
+
+ CCE-83937-3
-
-
+
-
+
-
- Limit Access to the Host IPC Namespace
- Containers should not be allowed access to the host's Interprocess Communication (IPC)
-namespace. To prevent containers from getting access to a host's
-IPC namespace, the appropriate Security Context Constraints (SCCs)
-should set allowHostIPC to false.
+
+ Verify User Who Owns The Open vSwitch Persistent System ID
+
+To properly set the owner of /etc/openvswitch/system-id.conf, run the command:
+$ sudo chown openvswitch /etc/openvswitch/system-id.conf
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.3
- A container running in the host's IPC namespace can use IPC
-to interact with processes outside the container potentially
-allowing an attacker to exploit a host process thereby enabling an
-attacker to exploit other services.
- CCE-84042-1
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-84085-0
+
+
+
-
+
-
- Limit Use of the CAP_NET_RAW
- Containers should not enable more capabilites than needed as this
-opens the door for malicious use. CAP_NET_RAW enables a container
-to launch a network attack on another container or cluster. To disable the
-CAP_NET_RAW capability, the appropriate Security Context Constraints (SCCs)
-should set NET_RAW in requiredDropCapabilities.
+
+ Verify User Who Owns The Open vSwitch Daemon PID File
+
+To properly set the owner of /run/openvswitch/ovs-vswitchd.pid, run the command:
+$ sudo chown openvswitch /run/openvswitch/ovs-vswitchd.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.7
- By default, containers run with a default set of capabilities as assigned
-by the Container Runtime which can include dangerous or highly privileged
-capabilities. If the CAP_NET_RAW is enabled, it may be misused
-by malicious containers or attackers.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-83888-8
+
+
+
-
+
-
- Limit Access to the Host Network Namespace
- Containers should not be allowed access to the host's network
-namespace. To prevent containers from getting access to a host's
-network namespace, the appropriate Security Context Constraints (SCCs)
-should set allowHostNetwork to false.
+
+ Verify User Who Owns The Open vSwitch Database Server PID
+
+To properly set the owner of /run/openvswitch/ovsdb-server.pid, run the command:
+$ sudo chown openvswitch /run/openvswitch/ovsdb-server.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.4
- A container running in the host's network namespace could
-access the host network traffic to and from other pods
-potentially allowing an attacker to exploit pods and network
-traffic.
- CCE-83492-9
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-83806-0
+
+
+
-
+
-
- Limit Containers Ability to Escalate Privileges
- Containers should be limited to only the privileges required
-to run and should not be allowed to escalate their privileges.
-To prevent containers from escalating privileges,
-the appropriate Security Context Constraints (SCCs)
-should set allowPrivilegeEscalation to false.
+
+ Verify User Who Owns The Kubernetes Scheduler Kubeconfig File
+
+To properly set the owner of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig, run the command:
+$ sudo chown root /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Scheduler service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.5
- Privileged containers have access to more of the Linux Kernel
-capabilities and devices. If a privileged container were
-compromised, an attacker would have full access to the container
-and host.
- CCE-83447-3
+ 1.1.16
+ The kubeconfig for the Scheduler contains parameters for the scheduler
+to access the Kube API.
+You should set its file ownership to maintain the integrity of the file.
+
+ CCE-84017-3
+
+
+
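+ Since this rule only applies to control-plane nodes, a hedged spot check could use a debug pod against one master (the node name master-0 is an assumption):
+
+# Print the owner of the scheduler kubeconfig from the host filesystem.
+oc debug node/master-0 -- chroot /host sh -c \
+  'stat -c "%U %n" /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/configmaps/scheduler-kubeconfig/kubeconfig'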
-
+
-
- Limit Privileged Container Use
- Containers should be limited to only the privileges required
-to run. To prevent containers from running as privileged containers,
-the appropriate Security Context Constraints (SCCs) should set
-allowPrivilegedContainer to false.
+
+ Verify User Who Owns The OpenShift etcd Data Directory
+ To properly set the owner of /var/lib/etcd, run the command: $ sudo chown root /var/lib/etcd
+ 1.1.12
+ The /var/lib/etcd directory contains highly-available distributed key/value data storage
+across an OpenShift cluster. Allowing users access to this directory could compromise OpenShift
+data and the cluster.
+
+
+
+
+
+
+
+
+ Verify Permissions on the OpenShift Container Network Interface Files
+
+To properly set the permissions of /etc/cni/net.d/*, run the command:
+$ sudo chmod 0644 /etc/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.1
- Privileged containers have access to all Linux Kernel
-capabilities and devices. If a privileged container were
-compromised, an attacker would have full access to the container
-and host.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-83379-8
+
+
+
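+ Because the rule targets a glob, a find-based sketch (predicates assumed; paths come from the rule itself) avoids missing-file errors when the directory is empty:
+
+# Set 0644 on every regular file directly under /etc/cni/net.d.
+sudo find /etc/cni/net.d -maxdepth 1 -type f -exec chmod 0644 {} +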
-
+
-
- Limit Access to the Host Process ID Namespace
- Containers should not be allowed access to the host's process
-ID namespace. To prevent containers from getting access to a host's
-process ID namespace, the appropriate Security Context Constraints (SCCs)
-should set allowHostPID to false.
+
+ Verify Permissions on the OpenShift Controller Manager Kubeconfig File
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.2
- A container running in the host's PID namespace can inspect
-processes running outside the container which can be used to
-escalate privileges outside of the container.
+ 1.1.17
+ The Controller Manager's kubeconfig contains information about how the
+component will access the API server. You should restrict its file
+permissions to maintain the integrity of the file. The file should be
+writable by only the administrators on the system.
+
+ CCE-83604-9
+
+
+
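+ To check every node carrying the master role, a hypothetical loop over oc get nodes (the usual OpenShift role label is assumed here):
+
+# Report the kubeconfig mode on each control-plane node.
+for node in $(oc get nodes -l node-role.kubernetes.io/master -o name); do
+  oc debug "$node" -- chroot /host sh -c \
+    'stat -c "%a %n" /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/configmaps/controller-manager-kubeconfig/kubeconfig'
+done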
-
+
-
- Limit Container Running As Root User
- Containers should be limited to only the privileges required
-to run and should very rarely be run as root user. To prevent
-containers from running as root user,
-the appropriate Security Context Constraints (SCCs) should set
-allowPrivilegedContainer to false.
+
+ Verify Permissions on the Etcd Database Directory
+
+To properly set the permissions of /var/lib/etcd, run the command:
+$ sudo chmod 0700 /var/lib/etcd
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.2.6
- Privileged containers have access to all Linux Kernel
-capabilities and devices. If a privileged container were
-compromised, an attacker would have full access to the container
-and host.
+ 1.1.11
+ etcd is a highly-available key-value store used by Kubernetes deployments for persistent
+storage of all of its REST API objects. This data directory should be protected from any
+unauthorized reads or writes. It should not be readable or writable by any group members
+or the world.
+
+ CCE-84013-2
+
+
+
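+ A small check-then-fix sketch for the 0700 requirement (the octal comparison via stat is the only assumption):
+
+# Tighten /var/lib/etcd only when its current mode differs from 700.
+mode=$(sudo stat -c '%a' /var/lib/etcd)
+[ "$mode" = "700" ] || sudo chmod 0700 /var/lib/etcd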
-
+
-
-
- OpenShift - Kubernetes - Scheduler Settings
- Contains evaluations for kube-scheduler configuration settings.
-
- Kube scheduler config filter
- Kube scheduler filter
- [.data."pod.yaml"]
-
-
- Kube scheduler config file path
- Kube scheduler config file path
- /api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod
-
-
- Ensure that the bind-address parameter is not used
- The Scheduler API service which runs on port 10251/TCP by default is used for
-health and metrics information and is available without authentication or
-encryption. As such it should only be bound to a localhost interface, to
-minimize the cluster's attack surface.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-{{.var_scheduler_filepath}}
- API endpoint, filter with with the jq utility using the following filter
- {{.var_scheduler_argument_filter}}
- and persist it to the local
- /api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod#569895645b4f9b87d4e21ab3c6fe4cc03627259826715e5043d5d8889c6c12d3
- file.
-
- CIP-003-8 R4.2
+
+ Verify Permissions on the Etcd Write-Ahead-Log Files
+
+To properly set the permissions of /var/lib/etcd/member/wal/*, run the command:
+$ sudo chmod 0600 /var/lib/etcd/member/wal/*
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
- CIP-007-3 R5.1
CIP-007-3 R6.1
CM-6
CM-6(1)
- SC-8
- SC-8(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 1.4.2
- In OpenShift 4, The Kubernetes Scheduler operator manages and updates the
-Kubernetes Scheduler deployed on top of OpenShift. By default, the operator
-exposes metrics via metrics service. The metrics are collected from the
-Kubernetes Scheduler operator. Profiling data is sent to healthzPort,
-the port of the localhost healthz endpoint. Changing this value may disrupt
-components that monitor the kubelet health.
- CCE-83674-2
-
-
-
-
-
-
-
-
-
- Ensure that the port parameter is zero
- The Scheduler API service which runs on port 10251/TCP by default is used for
-health and metrics information and is available without authentication or
-encryption. As such it should only be bound to a localhost interface, to
-minimize the cluster's attack surface.
- This rule's check operates on the cluster configuration dump.
-Therefore, you need to use a tool that can query the OCP API, retrieve the following:
-{{.var_scheduler_filepath}}
- API endpoint, filter with with the jq utility using the following filter
- {{.var_scheduler_argument_filter}}
- and persist it to the local
- /api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod#569895645b4f9b87d4e21ab3c6fe4cc03627259826715e5043d5d8889c6c12d3
- file.
-
- 1.4.2
- In OpenShift 4, The Kubernetes Scheduler operator manages and updates the
-Kubernetes Scheduler deployed on top of OpenShift. By default, the operator
-exposes metrics via metrics service. The metrics are collected from the
-Kubernetes Scheduler operator. Profiling data is sent to healthzPort,
-the port of the localhost healthz endpoint. Changing this value may disrupt
-components that monitor the kubelet health.
+ 1.1.11
+ etcd is a highly-available key-value store used by Kubernetes deployments for persistent
+storage of all of its REST API objects. This data directory should be protected from any
+unauthorized reads or writes. It should not be readable or writable by any group members
+or the world.
+
+ CCE-83382-2
-
-
+
-
+
-
-
- Kubernetes Secrets Management
- Secrets let you store and manage sensitive information,
-such as passwords, OAuth tokens, and ssh keys.
-Such information might otherwise be put in a Pod
-specification or in an image.
-
- Consider external secret storage
- Consider the use of an external secrets storage and management system,
-instead of using Kubernetes Secrets directly, if you have more complex
-secret management needs. Ensure the solution requires authentication to
-access secrets, has auditing of access to and use of secrets, and encrypts
-secrets. Some solutions also make it easier to rotate secrets.
+
+ Verify Permissions on the Etcd Member Pod Specification File
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/etcd-pod-*/etcd-pod.yaml
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.4.2
- Kubernetes supports secrets as first-class objects, but care needs to be
-taken to ensure that access to secrets is carefully limited. Using an
-external secrets provider can ease the management of access to secrets,
-especially where secrets are used across both Kubernetes and non-Kubernetes
-environments.
+ 1.1.7
+ The etcd pod specification file controls various parameters that
+set the behavior of the etcd service in the master node. etcd is a
+highly-available key-value store which Kubernetes uses for persistent
+storage of all of its REST API objects. You should restrict its file
+permissions to maintain the integrity of the file. The file should be
+writable by only the administrators on the system.
+
+ CCE-83973-8
+
+
+
-
+
-
- Do Not Use Environment Variables with Secrets
- Secrets should be mounted as data volumes instead of environment
-variables.
+
+ Verify Permissions on the Etcd PKI Certificate Files
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/etcd-*/secrets/*/*.crt, run the command:
+$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/etcd-*/secrets/*/*.crt
+ This rule is only applicable for nodes that run the Etcd service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 5.4.1
- Environment variables are subject and very susceptible to
-malicious hijacking methods by an adversary, as such,
-environment variables should never be used for secrets.
+ 1.1.20
+ OpenShift makes use of a number of certificate files as part of the operation
+of its components. The permissions on these files should be set to
+600 or more restrictive to protect their integrity.
+
+ CCE-83362-4
+
+
+
-
+
-
-
- Kubernetes - Worker Node Settings
- Contains evaluations for the worker node configuration settings.
-
- Verify Group Who Owns The Kubelet Configuration File
- To properly set the group owner of /etc/kubernetes/kubelet.conf, run the command: $ sudo chgrp root /etc/kubernetes/kubelet.conf
+
+ Verify Permissions on the OpenShift SDN Container Network Interface Plugin IP Address Allocations
+
+To properly set the permissions of /var/lib/cni/networks/openshift-sdn/*, run the command:
+$ sudo chmod 0644 /var/lib/cni/networks/openshift-sdn/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14655,55 +16341,58 @@ environment variables should never be used for secrets.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.6
- The kubelet configuration file contains information about the configuration of the
-OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-84233-6
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-83469-7
-
+
-
+
-
- Verify Group Who Owns The Worker Proxy Kubeconfig File
- To ensure the Kubernetes ConfigMap is mounted into the sdn daemonset pods with the
-correct ownership, make sure that the sdn-config ConfigMap is mounted using
-a ConfigMap at the /config mount point and that the sdn container
-points to that configuration using the --proxy-config command line option.
-Run:
- oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
-and ensure the --proxy-config parameter points to
-/config/kube-proxy-config.yaml and that the config mount point is
-mounted from the sdn-config ConfigMap.
+
+ Verify Permissions on the Kubernetes API Server Pod Specification File
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-apiserver-pod-*/kube-apiserver-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes API Server service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.4
- The kubeconfig file for kube-proxy provides permissions to the kube-proxy service.
-The proxy kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-The file is provided via a ConfigMap mount, so the kubelet itself makes sure that the
-file permissions are appropriate for the container taking it into use.
-
+ 1.1.1
+ If the Kubernetes specification file is writable by a group-owner or the
+world, the risk of its compromise is increased. The file contains the configuration of
+the Kubernetes API server that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+ CCE-83983-7
+
+
+
-
+
-
- Verify Group Who Owns the Worker Certificate Authority File
- To properly set the group owner of /etc/kubernetes/kubelet-ca.crt, run the command: $ sudo chgrp root /etc/kubernetes/kubelet-ca.crt
+
+ Verify Permissions on the Kubernetes Controller Manager Pod Specification File
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-*/kube-controller-manager-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Controller Manager service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14712,22 +16401,56 @@ file permissions are appropriate for the container taking it into use.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.8
- The worker certificate authority file contains the certificate authority
-certificate for an OpenShift node that is configured on the system. Protection of this file is
+ 1.1.3
+ If the Kubernetes specification file is writable by a group-owner or the
+world, the risk of its compromise is increased. The file contains the configuration of
+a Kubernetes Controller Manager server that is configured on the system. Protection of this file is
critical for OpenShift security.
-
- CCE-83440-8
+
+ CCE-84161-9
-
+
-
+
-
- Verify Group Who Owns The Worker Kubeconfig File
- To properly set the group owner of /var/lib/kubelet/kubeconfig, run the command: $ sudo chgrp root /var/lib/kubelet/kubeconfig
+
+ Verify Permissions on the Kube Scheduler Pod Specification File
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml
+ 1.1.5
+ If the Kube specification file is writable by a group-owner or the
+world, the risk of its compromise is increased. The file contains the configuration of
+an OpenShift scheduler that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+
+
+
+
+ Verify Permissions on the OpenShift Admin Kubeconfig File
+
+To properly set the permissions of /etc/kubernetes/kubeconfig, run the command:
+$ sudo chmod 0600 /etc/kubernetes/kubeconfig
+ 1.1.13
+ If the /etc/kubernetes/kubeconfig file is writable by a group-owner or the
+world, the risk of its compromise is increased. The file contains the administration configuration of the
+OpenShift cluster that is configured on the system. Protection of this file is
+critical for OpenShift security.
+
+
+
+
+
+ Verify Permissions on the OpenShift Admin Kubeconfig Files
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig, run the command:
+$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/*.kubeconfig
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14736,24 +16459,28 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.10
- The worker kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-83409-3
+ 1.1.13
+ There are various kubeconfig files that can be used by the administrator,
+defining various settings for the administration of the cluster. These files
+contain credentials that can be used to control the cluster and are needed
+for disaster recovery, and each kubeconfig points to a different endpoint in
+the cluster. You should restrict their file permissions to maintain the
+integrity of these kubeconfig files, as an attacker who gains access to
+them can take over the cluster.
+
+ CCE-84278-1
-
+
-
+
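+ A sketch enforcing 0600 across the node kubeconfigs; everything here comes from the rule except the find predicates:
+
+# Fix any node kubeconfig whose mode is not exactly 0600.
+sudo find /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs \
+  -name '*.kubeconfig' ! -perm 0600 -exec chmod 0600 {} +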
-
- Verify Group Who Owns The OpenShift Node Service File
- '
- To properly set the group owner of /etc/systemd/system/kubelet.service, run the command:
- $ sudo chgrp root /etc/systemd/system/kubelet.service'
+
+ Verify Permissions on the OpenShift Multus Container Network Interface Plugin Files
+
+To properly set the permissions of /var/run/multus/cni/net.d/*, run the command:
+$ sudo chmod 0644 /var/run/multus/cni/net.d/*
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14762,23 +16489,28 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.2
- The /etc/systemd/system/kubelet.service
-file contains information about the configuration of the
-OpenShift node service that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-83975-3
+ CCE-83467-1
-
+
-
+
-
- Verify User Who Owns The Kubelet Configuration File
- To properly set the owner of /var/lib/kubelet/config.json, run the command: $ sudo chown root /var/lib/kubelet/config.json
+
+ Verify Permissions on the OpenShift PKI Certificate Files
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-*/secrets/*/tls.crt, run the command:
+$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/kube-*/secrets/*/tls.crt
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14787,22 +16519,83 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.6
- The kubelet configuration file contains information about the configuration of the
-OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
- CCE-85900-9
+ 1.1.20
+ OpenShift makes use of a number of certificate files as part of the operation
+of its components. The permissions on these files should be set to
+600 or more restrictive to protect their integrity.
+
+ CCE-83552-0
-
+
-
+
-
- Verify User Who Owns The Kubelet Configuration File
- To properly set the owner of /etc/kubernetes/kubelet.conf, run the command: $ sudo chown root /etc/kubernetes/kubelet.conf
+
+ Verify Permissions on the OpenShift PKI Private Key Files
+
+To properly set the permissions of /etc/kubernetes/static-pod-resources/*/*/*/*.key, run the command:
+$ sudo chmod 0600 /etc/kubernetes/static-pod-resources/*/*/*/*.key
+ This rule is only applicable for nodes that run the Kubernetes Control Plane.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
+ CIP-003-8 R1.3
+ CIP-003-8 R3
+ CIP-003-8 R3.1
+ CIP-003-8 R3.2
+ CIP-003-8 R3.3
+ CIP-003-8 R5.1.1
+ CIP-003-8 R5.3
+ CIP-004-6 R2.2.3
+ CIP-004-6 R2.3
+ CIP-007-3 R5.1
+ CIP-007-3 R5.1.2
+ CIP-007-3 R5.2
+ CIP-007-3 R5.3.1
+ CIP-007-3 R5.3.2
+ CIP-007-3 R5.3.3
+ CM-6
+ CM-6(1)
+ IA-5(2)
+ SRG-APP-000516-CTR-001325
+ SRG-APP-000516-CTR-001330
+ SRG-APP-000516-CTR-001335
+ 1.1.21
+ OpenShift makes use of a number of key files as part of the operation of its
+components. The permissions on these files should be set to 600
+to protect their integrity and confidentiality.
+
+ CCE-83580-1
+
+
+
+
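+ For the wildcarded key paths, a hedged one-liner keyed on the .key suffix:
+
+# Restrict every static-pod-resources private key to owner read/write only.
+sudo find /etc/kubernetes/static-pod-resources -name '*.key' -exec chmod 0600 {} +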
+
+
+
+
+ Verify Permissions on the OpenShift Open vSwitch Files
+
+To properly set the permissions of /etc/openvswitch/.*, run the command:
+$ sudo chmod 0644 /etc/openvswitch/.*
+ 1.4.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+
+
+
+
+
+
+
+ Verify Permissions on the Open vSwitch Configuration Database
+
+To properly set the permissions of /etc/openvswitch/conf.db, run the command:
+$ sudo chmod 0640 /etc/openvswitch/conf.db
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14811,55 +16604,52 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.6
- The kubelet configuration file contains information about the configuration of the
-OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-83976-1
+ CCE-83788-0
-
+
-
+
-
- Verify User Who Owns The Worker Proxy Kubeconfig File
- To ensure the Kubernetes ConfigMap is mounted into the sdn daemonset pods with the
-correct ownership, make sure that the sdn-config ConfigMap is mounted using
-a ConfigMap at the /config mount point and that the sdn container
-points to that configuration using the --proxy-config command line option.
-Run:
- oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
-and ensure the --proxy-config parameter points to
-/config/kube-proxy-config.yaml and that the config mount point is
-mounted from the sdn-config ConfigMap.
+
+ Verify Permissions on the Open vSwitch Configuration Database Lock
+
+To properly set the permissions of /etc/openvswitch/.conf.db.~lock~, run the command:
+$ sudo chmod 0600 /etc/openvswitch/.conf.db.~lock~
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
CM-6
CM-6(1)
- Req-2.2
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.4
- The kubeconfig file for kube-proxy provides permissions to the kube-proxy service.
-The proxy kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
-
-The file is provided via a ConfigMap mount, so the kubelet itself makes sure that the
-file permissions are appropriate for the container taking it into use.
-
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
+
+ CCE-84202-1
+
+
+
-
+
-
- Verify User Who Owns the Worker Certificate Authority File
- To properly set the owner of /etc/kubernetes/kubelet-ca.crt, run the command: $ sudo chown root /etc/kubernetes/kubelet-ca.crt
+
+ Verify Permissions on the Open vSwitch Process ID File
+
+To properly set the permissions of /var/run/openvswitch/ovs-vswitchd.pid, run the command:
+$ sudo chmod 0644 /var/run/openvswitch/ovs-vswitchd.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14868,22 +16658,25 @@ file permissions are appropriate for the container taking it into use.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.8
- The worker certificate authority file contains the certificate authority
-certificate for an OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-83495-2
+ CCE-83666-8
-
+
-
+
-
- Verify User Who Owns The Worker Kubeconfig File
- To properly set the owner of /var/lib/kubelet/kubeconfig, run the command: $ sudo chown root /var/lib/kubelet/kubeconfig
+
+ Verify Permissions on the Open vSwitch Persistent System ID
+
+To properly set the permissions of /etc/openvswitch/system-id.conf, run the command:
+$ sudo chmod 0644 /etc/openvswitch/system-id.conf
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14892,24 +16685,25 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.10
- The worker kubeconfig file contains information about the administrative configuration of the
-OpenShift cluster that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-83408-5
+ CCE-83400-2
-
+
-
+
-
- Verify User Who Owns The OpenShift Node Service File
- '
- To properly set the owner of /etc/systemd/system/kubelet.service, run the command:
- $ sudo chown root /etc/systemd/system/kubelet.service '
+
+ Verify Permissions on the Open vSwitch Daemon PID File
+
+To properly set the permissions of /run/openvswitch/ovs-vswitchd.pid, run the command:
+$ sudo chmod 0644 /run/openvswitch/ovs-vswitchd.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14918,25 +16712,25 @@ critical for OpenShift security.
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.2
- The /etc/systemd/system/kubelet.service
-file contains information about the configuration of the
-OpenShift node service that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-84193-2
+ CCE-83710-4
-
+
-
+
-
- Verify Permissions on The Kubelet Configuration File
+
+ Verify Permissions on the Open vSwitch Database Server PID
-To properly set the permissions of /var/lib/kubelet/config.json, run the command:
-$ sudo chmod 0600 /var/lib/kubelet/config.json
+To properly set the permissions of /run/openvswitch/ovsdb-server.pid, run the command:
+$ sudo chmod 0644 /run/openvswitch/ovsdb-server.pid
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14945,25 +16739,28 @@ To properly set the permissions of /var/lib/kubelet/config.json
SRG-APP-000516-CTR-001325
SRG-APP-000516-CTR-001330
SRG-APP-000516-CTR-001335
- 4.1.5
- If the kubelet configuration file is writable by a group-owner or the
-world the risk of its compromise is increased. The file contains the configuration of
-an OpenShift node that is configured on the system. Protection of this file is
-critical for OpenShift security.
+ 1.1.9
+ CNI (Container Network Interface) files consist of a specification and libraries for
+writing plugins to configure network interfaces in Linux containers, along with a number
+of supported plugins. Allowing writable access to the files could allow an attacker to modify
+the networking configuration, potentially adding a rogue network connection.
- CCE-85896-9
+ CCE-83679-1
-
+
-
+
-
- Verify Permissions on The Kubelet Configuration File
+
+ Verify Permissions on the Kubernetes Scheduler Pod Specification File
-To properly set the permissions of /etc/kubernetes/kubelet.conf, run the command:
-$ sudo chmod 0644 /etc/kubernetes/kubelet.conf
+To properly set the permissions of /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml, run the command:
+$ sudo chmod 0644 /etc/kubernetes/static-pod-resources/kube-scheduler-pod-*/kube-scheduler-pod.yaml
+ This rule is only applicable for nodes that run the Kubernetes Scheduler service.
+The aforementioned service is only running on the nodes labeled
+"master" by default.
CIP-003-8 R6
CIP-004-6 R3
CIP-007-3 R6.1
@@ -14972,71 +16769,71 @@ To properly set the permissions of /etc/kubernetes/kubelet.conf