
Commit

Stable code commit for v0.2
mohitagrawal-crest committed Jul 20, 2020
1 parent c263ec8 commit f27493e
Showing 294 changed files with 12,354 additions and 1,517 deletions.
47 changes: 28 additions & 19 deletions ConsulExtension/Media/Readme/readme.txt
@@ -1,42 +1,51 @@
The Consul Extension for ACI (Beta) application provides ACI administrators with L4-L7 service mesh visibility and an automated way to manage L2-L3 infrastructure based on L4-L7 service requirements.
Monitor and Optimize Application Connectivity in Any Environment
The Consul Extension for ACI application enables greater control over Day 2+ operations and visibility into Layer 4/7 data of applications running in networks managed by Cisco APIC. Using the Consul Extension for ACI, network operators will be able to respond more quickly to connectivity issues and reduce the Mean-time-to-Resolution (MTTR). As the network topology becomes more dynamic and complex, Consul and ACI provide a consistent, automated workflow for gathering application information and health data.

This application offers enhanced Consul-to-ACI L4-L7 service visibility, including dynamic service health, enabling faster Mean-time-to-Resolution (MTTR), as well as dynamic Network Middleware Automation driven by L4-L7 service mesh intentions.

Service visibility and faster Mean-time-to-Resolution (MTTR):
- Real-time visibility into dynamic L4-L7 services, service health and service-to-service communication on virtual, container and bare-metal workloads connected by the ACI multi-cloud network.
- Faster identification of issues based on service health and network data correlation.
ACI users should download the Consul Extension for ACI from the DC App Center. Once the application is configured and a Consul agent is added to the desired environments, ACI begins pulling information from Consul, including the number of agents running, the services registered with those agents, the nodes discovered by Consul, and any ACI endpoints with services that Consul has discovered.
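The reads described above map onto a handful of Consul HTTP API endpoints (service catalog, node catalog, agent info). A minimal stdlib-only sketch; `consul_url` and `fetch` are illustrative helpers, not part of the extension:

```python
import json
from urllib.request import urlopen

# Illustrative helpers (not part of the app): the kinds of reads the
# extension performs through the Consul seed agent's HTTP API.
CONSUL_ENDPOINTS = {
    "services": "/v1/catalog/services",  # services registered with Consul
    "nodes": "/v1/catalog/nodes",        # nodes discovered by Consul
    "agent": "/v1/agent/self",           # the agent's own configuration
}

def consul_url(host, what, port=8500):
    """Build the URL for one of the catalog/agent reads listed above."""
    return "http://{}:{}{}".format(host, port, CONSUL_ENDPOINTS[what])

def fetch(host, what, port=8500):
    """Perform the read and decode the JSON payload (network call)."""
    with urlopen(consul_url(host, what, port)) as resp:
        return json.load(resp)
```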


Users can then use the Operations feature on the APIC Dashboard to get a list of existing services and create a visual map of the network topology. From here, operators can map each service to each ACI endpoint, drill down into specific service level data, and see if that service is actively reachable. In the future, Consul will be able to apply ACI policies directly to services.


Benefits of the Consul Extension for ACI:


End-to-End Service Visibility - Using the Consul Extension for ACI, network operators can retrieve Layer 4-7 service data for applications running at each ACI endpoint. This provides greater insight into which services are currently running on the network.


Reduce Downtime & Failure Rates - Enable operators to trace connectivity issues at the service level and reduce the Mean-time-to-Resolution for network issues. Enable individuals to debug issues, rather than as part of a broader, more time-intensive team effort.


Improve Productivity Across the Org - Develop stronger collaboration between application engineers and network operators by creating a single source of truth for information on applications. Enable ACI to provide a single pane of visibility for both developers and operators.

Network Middleware Automation:
- Consistent L4-L7 service-mesh-driven network policy (contracts and filters) automation for virtual, bare-metal and container workloads across private and public cloud in your ACI multi-cloud network.
- Easier transition to secure service-mesh-based deployments for application teams and DevOps operators with the ACI multi-cloud network.

Features:
- Supports Consul Enterprise and Consul open-source deployments.
- Visibility into L4-L7 services running on multiple Consul Datacenters.
- Self-discovery of an entire Consul Datacenter service catalog through a single seed agent (Consul server).
- Improved Visibility and Day 2+ automation of L4-L7 services registered with Consul.
- Self-discovery of all services registered with Consul’s service catalog through a single agent.
- Automated correlation of L4-L7 service-to-ACI fabric and logical topology.
- Dynamic Service Dashboard to view L4-L7 service health.


Highlights:
- Enhanced L4-L7 service visibility for L2-L3 ACI infrastructure.
- Greenfield and brownfield deployments supported.
- Persists the data configured through the application.
- No impact on ACI or Consul configurations if the application is deleted.
- Maintains the organization's operational model and ownership.


Pre-requisites:
- APIC version 3.2(1l) or above
- Consul version 1.6.3/1.6.3+ent or above
- In-band or out-of-band connectivity between the APIC and the Consul seed agent (Consul server) on TCP ports 8500 and 8501.
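The connectivity prerequisite can be checked with a simple TCP probe; `consul_ports_reachable` is a hypothetical helper, not part of the application (8500 and 8501 are Consul's default HTTP and HTTPS API ports):

```python
import socket

# Hypothetical helper (not part of the app): verify that the APIC host can
# reach the Consul seed agent on its HTTP (8500) and HTTPS (8501) API ports.
def consul_ports_reachable(host, ports=(8500, 8501), timeout=2.0):
    """Return a dict mapping each port to True if a TCP connect succeeds."""
    status = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[port] = True
        except OSError:  # refused, unreachable, or timed out
            status[port] = False
    return status
```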

Beta release limitations:
- Supported on the Chrome web browser only.
- Supported for on-premises APIC only.

Before you begin:
User guide: https://tinyurl.com/y9ztt362
FAQs: https://tinyurl.com/ya8b95j2
Support: https://github.com/ciscoecosystem/consul-aci/issues


About HashiCorp Consul (https://www.consul.io/):
Consul is a service networking platform by HashiCorp that provides service discovery, configuration, and secure L4-L7 service-to-service connectivity across any cloud or runtime.
145 changes: 79 additions & 66 deletions ConsulExtension/Service/alchemy_core.py
@@ -1,8 +1,8 @@
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, ForeignKey, String, MetaData, PickleType, DateTime, Boolean
from datetime import datetime
from sqlalchemy.sql import select
from sqlalchemy.interfaces import PoolListener

from custom_logger import CustomLogger

@@ -11,6 +11,11 @@
DATABASE_NAME = 'sqlite:///ConsulDatabase.db'


class MyListener(PoolListener):
def connect(self, dbapi_con, con_record):
dbapi_con.execute('pragma journal_mode=WAL')
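Standalone, the listener's pragma switches SQLite into write-ahead-logging mode, which lets readers proceed concurrently with a writer. A minimal sketch (the helper name and file path are illustrative, not from the commit):

```python
import os
import sqlite3
import tempfile

def open_wal_connection(db_path):
    """Open an SQLite connection and enable WAL, as MyListener does for
    every pooled connection; SQLite answers the pragma with the active mode."""
    con = sqlite3.connect(db_path)
    mode = con.execute("pragma journal_mode=WAL").fetchone()[0]
    return con, mode

# WAL requires a file-backed database (the pragma is ignored for ':memory:').
con, mode = open_wal_connection(os.path.join(tempfile.mkdtemp(), "demo.db"))
```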


class Database:
"""Database class with all db functionalities"""

@@ -145,6 +150,7 @@ class Database:
'vrf',
'epg_health',
'app_profile',
'epg_alias',
'created_ts',
'updated_ts',
'last_checked_ts'
@@ -158,12 +164,11 @@ class Database:

def __init__(self):
try:
self.engine = create_engine(DATABASE_NAME, listeners=[MyListener()])
self.table_obj_meta = dict()
self.table_pkey_meta = dict()
except Exception as e:
logger.exception("Exception in creating db obj: {}".format(str(e)))

def create_tables(self):
metadata = MetaData()
@@ -247,10 +252,8 @@ def create_tables(self):

self.servicechecks = Table(
self.SERVICECHECKS_TABLE_NAME, metadata,
Column('check_id', String, primary_key=True),
Column('service_id', String, ForeignKey(self.service.c.service_id), primary_key=True),
Column('service_name', String),
Column('name', String),
Column('type', String),
@@ -290,9 +293,10 @@ def create_tables(self):
Column('EPG', String),
Column('BD', String),
Column('contracts', PickleType),
Column('vrf', String),
Column('epg_health', String),
Column('app_profile', String),
Column('epg_alias', String),
Column('created_ts', DateTime),
Column('updated_ts', DateTime),
Column('last_checked_ts', DateTime)
@@ -394,7 +398,7 @@ def create_tables(self):
Column('EPG', String),
Column('BD', String),
Column('contracts', PickleType),
Column('vrf', String),
Column('epg_health', String),
Column('app_profile', String),
Column('created_ts', DateTime),
@@ -469,24 +473,42 @@ def create_tables(self):
logger.exception("Exception in {} Error:{}".format(
'create_tables()', str(e)))


def insert_into_table(self, connection, table_name, field_values):
field_values = list(field_values)
try:
ins = None
table_name = table_name.lower()
field_values.append(datetime.now())
ins = self.table_obj_meta[table_name].insert().values(field_values)
if ins is not None:
connection.execute(ins)
return True
except Exception as e:
logger.exception(
"Exception in data insertion in {} Error:{}".format(table_name, str(e)))
return False

def select_eps_from_mapping(self, connection, tn, is_enabled):
try:
# Bind values as parameters via the table object instead of
# concatenating them into raw SQL.
table_obj = self.table_obj_meta[self.MAPPING_TABLE_NAME]
select_query = select([table_obj.c.ip]).where(
table_obj.c.enabled == is_enabled).where(table_obj.c.tenant == tn)
result = connection.execute(select_query)
return result
except Exception as e:
logger.exception("Exception in selecting data from {} Error:{}".format(self.MAPPING_TABLE_NAME, str(e)))
return None

def select_from_ep_with_tenant(self, connection, tn):
try:
# Filter on the column object rather than comparing the string
# 'tenant' to tn, and execute the bound query directly.
table_obj = self.table_obj_meta[self.EP_TABLE_NAME]
select_query = table_obj.select().where(table_obj.c.tenant == tn)
result = connection.execute(select_query)
return result
except Exception as e:
logger.exception("Exception in selecting data from {} Error:{}".format(self.EP_TABLE_NAME, str(e)))
return None

def select_from_table(self, connection, table_name, primary_key={}):
try:
select_query = None
table_name = table_name.lower()
@@ -499,16 +521,14 @@ def select_from_table(self, table_name, primary_key={}):
else:
select_query = self.table_obj_meta[table_name].select()

if select_query is not None:
result = connection.execute(select_query)
return result.fetchall()
except Exception as e:
logger.exception("Exception in selecting data from {} Error:{}".format(table_name, str(e)))
return None


def update_in_table(self, connection, table_name, primary_key, new_record_dict):
try:
table_name = table_name.lower()
table_obj = self.table_obj_meta[table_name]
@@ -518,15 +538,14 @@ def update_in_table(self, table_name, primary_key, new_record_dict):
update_query = update_query.where(
self.table_pkey_meta[table_name][key] == primary_key[key])
update_query = update_query.values(new_record_dict)
connection.execute(update_query)
return True
except Exception as e:
logger.exception(
"Exception in updating {} Error:{}".format(table_name, str(e)))
return False


def delete_from_table(self, connection, table_name, primary_key={}):
try:
table_name = table_name.lower()
if primary_key:
@@ -537,21 +556,20 @@ def delete_from_table(self, table_name, primary_key={}):
self.table_pkey_meta[table_name][key] == primary_key[key])
else:
delete_query = self.table_obj_meta[table_name].delete()
connection.execute(delete_query)
return True
except Exception as e:
logger.exception(
"Exception in deletion from {} Error:{}".format(table_name, str(e)))
return False


def insert_and_update(self, connection, table_name, new_record, primary_key={}):
table_name = table_name.lower()
if primary_key:
old_data = self.select_from_table(connection, table_name, primary_key)
if old_data is not None:
if len(old_data) > 0:
old_data = old_data[0]
new_record_dict = dict()
index = []
for i in range(len(new_record)):
@@ -565,71 +583,66 @@ def insert_and_update(self, table_name, new_record, primary_key={}):

if new_record_dict:
new_record_dict['updated_ts'] = datetime.now()
self.update_in_table(connection, table_name, primary_key, new_record_dict)
else:
self.insert_into_table(connection, table_name, new_record)
else:
return False
else:
self.insert_into_table(connection, table_name, new_record)
return True


def get_join_obj(self, table_name1, table_name2, datacenter=None):
try:
table_name1 = table_name1.lower()
table_name2 = table_name2.lower()
obj1 = self.table_obj_meta[table_name1]
obj2 = self.table_obj_meta[table_name2]
if datacenter:
join_obj = obj1.join(obj2, isouter=True)
else:
join_obj = obj1.join(obj2, obj1.c.dn == obj2.c.dn, isouter=True)
return join_obj
except Exception as e:
logger.exception(
"Exception in joining tables: {} & {}, Error: {}".format(table_name1, table_name2, str(e)))
return None


def join(self, connection, datacenter=None, tenant=None):
try:
if datacenter:
obj1 = self.get_join_obj("node", "nodechecks", datacenter)
obj2 = self.get_join_obj("service", "servicechecks", datacenter)
join_obj = obj1.join(obj2)
smt = select([self.node, self.service, self.nodechecks, self.servicechecks]).select_from(join_obj)
elif tenant:
join_obj = self.get_join_obj("ep", "epg")
smt = select([self.ep, self.epg]).select_from(join_obj)
result = connection.execute(smt)
return result
except Exception as e:
logger.exception(
"Exception in join, Error: {}".format(str(e)))
return None


def join_formatter(self, result):
if result is None:
return []
return_list = []
for each in result:
return_list.append({
'node_id': each[0],
'node_name': each[1],
'node_ips': each[2],
'node_check': each[28],
'service_id': each[7],
'service_name': each[9],
'service_ip': each[10],
'service_port': each[11],
'service_address': each[12],
'service_tags': each[13],
'service_kind': each[14],
'service_namespace': each[15],
'service_checks': each[39]
})
return return_list
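The refactor running through this file replaces the shared `self.conn` with a `connection` parameter on every operation, so callers create and own the connection's lifetime. A toy stdlib-only stand-in for the pattern (class, table, and value names are illustrative, not from the commit):

```python
import sqlite3

# Toy stand-in for the refactor: each operation takes the connection as a
# parameter instead of holding one open on the object (the old self.conn).
class ToyDatabase:
    def insert_into_table(self, connection, table_name, values):
        connection.execute(
            "INSERT INTO {} VALUES (?, ?)".format(table_name), values)
        return True

    def select_from_table(self, connection, table_name):
        return connection.execute(
            "SELECT * FROM {}".format(table_name)).fetchall()

# The caller opens the connection and threads it through every call.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ep (mac TEXT, ip TEXT)")
db = ToyDatabase()
db.insert_into_table(con, "ep", ("aa:bb", "10.0.0.1"))
rows = db.select_from_table(con, "ep")
```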
