Update docs to showcase cleanup of Connector object #914

Comments
Hi @lauraseidler, thanks for opening an issue on the Cloud SQL Python Connector 😄 Let me see if I am understanding the issue correctly...

So all the queries are running successfully without any errors, and you are only seeing the errors surfaced when the application shuts down? This would probably hint at the Connector object not being closed properly before your application exits. Details in our README:

# initialize Cloud SQL Python Connector as context manager
with Connector() as connector:
    ...

When you say the errors appear on shut down of your application server, is that when Cloud Run scales down your instances? Let me know if this helps and if it does I can update the sample or documentation as needed. 😄
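For reference, a minimal sketch of that context-manager pattern with a synchronous SQLAlchemy engine; the instance connection name, credentials, and pg8000 driver below are placeholders, not details from this issue:

from google.cloud.sql.connector import Connector
import sqlalchemy

# Connector as a context manager: its background refresh tasks are
# cleaned up when the block exits
with Connector() as connector:

    def getconn():
        return connector.connect(
            "project:region:instance",  # placeholder instance connection name
            "pg8000",
            user="my-user",
            password="my-password",
            db="my-db",
        )

    engine = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
    with engine.connect() as conn:
        conn.execute(sqlalchemy.text("SELECT 1"))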
Hi @jackwotherspoon, I think we're currently not explicitly (or implicitly) closing the connector object, only the connection itself. I hadn't really checked the README that far (my bad), as the first part looked identical to the GCP documentation - but that one doesn't mention this part, so we never included it. It sounds like this might indeed be the issue and it would make sense to me, so I will try and see if it changes things and report back, thanks!
Yes, when Cloud Run scales down. This is especially noticeable when we roll out a new version and the old version has been running for a while, and errors have "accumulated" over multiple instances that get scaled down in rapid succession.
Hi @lauraseidler, let me know if closing the Connector resolves the errors you are seeing.

If closing the Connector does resolve the issue, I will update this issue to track updating the code sample used in the docs to include closing the Connector.
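A rough sketch of what that explicit cleanup could look like for the synchronous Connector (placeholder names, not the final doc sample):

from google.cloud.sql.connector import Connector

# create a single Connector for the lifetime of the application
connector = Connector()

def getconn():
    # placeholder instance connection name and credentials
    return connector.connect(
        "project:region:instance",
        "pg8000",
        user="my-user",
        password="my-password",
        db="my-db",
    )

# ... build a SQLAlchemy engine with creator=getconn and run queries ...

# on application shutdown, close the Connector so its background
# refresh tasks are torn down cleanly
connector.close()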
@jackwotherspoon Hello, I am having similar issues with my FastAPI application.
Here is my code for the db connector and session:

from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.asyncio.session import AsyncSession
import asyncpg
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
from google.cloud.sql.connector import Connector, create_async_connector
from src.core.config import settings


async def init_connection_pool(connector: Connector) -> AsyncEngine:
    async def getconn() -> asyncpg.Connection:
        conn: asyncpg.Connection = await connector.connect_async(
            settings.POSTGRES_CONN_NAME,
            "asyncpg",
            user=settings.POSTGRES_USER,
            password=settings.POSTGRES_PASSWORD,
            db=settings.POSTGRES_DB,
        )
        return conn

    pool = create_async_engine(
        "postgresql+asyncpg://",
        async_creator=getconn,
        pool_size=20,
        max_overflow=10,
        pool_timeout=10,
        pool_recycle=1200,
    )
    return pool


async def get_session():
    # initialize Connector object for connections to Cloud SQL
    connector = await create_async_connector()
    # initialize connection pool
    engine = await init_connection_pool(connector)
    async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
    async with async_session() as session:
        try:
            yield session
        finally:
            await session.close()

I am running this app on Cloud Run and receive a lot of these errors. Is there any way to get rid of them?
@NickNaskida Yes it seems you are running into the same issue where the Connector object is never closed. To close the Connector, you can explicitly call close_async() at the end of your session generator:
async def get_session():
    # initialize Connector object for connections to Cloud SQL
    connector = await create_async_connector()
    # initialize connection pool
    engine = await init_connection_pool(connector)
    async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
    async with async_session() as session:
        try:
            yield session
        finally:
            await session.close()
            # explicitly close connector
            await connector.close_async()
Or, alternatively, initialize the Connector as an async context manager so it is closed automatically:

async def get_session():
    # initialize Connector as async context manager
    loop = asyncio.get_running_loop()
    async with Connector(loop=loop) as connector:
        # initialize connection pool
        engine = await init_connection_pool(connector)
        async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
        async with async_session() as session:
            try:
                yield session
            finally:
                await session.close()
I've updated this issue to now reflect updating the code samples to properly document cleaning up the Connector object.
@jackwotherspoon thanks for the quick response. I applied your changes yesterday and some of the issues were resolved; however, the same error was logged again today.

UPDATE: I upgraded aiohttp to the latest version.
Hey @jackwotherspoon, so the issue that I posted above still exists even after updating aiohttp. I use the first approach that you suggested above (explicitly calling close_async()).

I think this issue is somehow also related to too many idle connections on my database. I currently have this problem that I didn't manage to solve, and because of it I usually get an error about too many connections.

My engine config:

pool = create_async_engine(
    "postgresql+asyncpg://",
    async_creator=getconn,
    pool_size=5,
    max_overflow=10,
    pool_timeout=10,
    pool_recycle=1200,
)
return pool

I believe this happens because the Connector and the engine (with its connection pool) are created again on every request in get_session():

async def get_session():
    # initialize Connector object for connections to Cloud SQL
    connector = await create_async_connector()
    # initialize connection pool
    engine = await init_connection_pool(connector)
    async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
    async with async_session() as session:
        try:
            yield session
        finally:
            await session.close()
            # explicitly close connector
            await connector.close_async()
@NickNaskida The error you are seeing is indeed most likely because you are hitting the max number of idle connections allowed by Cloud SQL. This is normally due to confusion around the use of connection pooling and it not being properly configured, as you pointed out. This is most likely no longer related to the cleanup of the Connector object.
You would be correct, sorry I should have caught that previously but my FastAPI knowledge is limited. If you are calling create_async_connector and init_connection_pool inside get_session, then a new Connector and a new connection pool are created for every request, which defeats the purpose of pooling.

A couple of tips for Cloud Run to optimize performance with the Cloud SQL Connectors are to lazy init the database engine and share it across requests. The lazy init strategy works really well with FastAPI's lifespan event 🤞 :

from contextlib import asynccontextmanager
from fastapi import FastAPI
from sqlalchemy.orm import sessionmaker

# global engine variable to be shared across sessions
engine = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    global engine
    # initialize Connector object for connections to Cloud SQL
    connector = await create_async_connector()
    # init the engine
    engine = await init_connection_pool(connector)
    yield
    # clean up the Cloud SQL Connector
    await connector.close_async()


app = FastAPI(lifespan=lifespan)


async def get_session():
    global engine
    async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
    async with async_session() as session:
        try:
            yield session
        finally:
            await session.close()

This will initialize an AsyncEngine once at application startup (once per Cloud Run instance) and share it across all requests, instead of creating a new engine and Connector for every request.
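As a usage illustration (not from the original comment), a hypothetical route consuming this get_session dependency might look roughly like:

from fastapi import Depends
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

# hypothetical endpoint: each request borrows a session backed by the
# single engine created in the lifespan handler above
@app.get("/health")
async def health(session: AsyncSession = Depends(get_session)):
    result = await session.execute(text("SELECT 1"))
    return {"db": result.scalar_one()}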
@jackwotherspoon That was it! Thank you very much! P.S. You should definitely add this to the README & docs, because I searched a lot on the web over the weekend and didn't find anything like this.
Improved the cleanup of the Connector object when it is garbage collected.
Seems like the garbage collection improvement by @jackwotherspoon does not fix the issue (at least not for us). We had to close the connector manually and do it via atexit:

connector = Connector()
atexit.register(lambda: connector.close())
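A slightly fuller sketch of that manual approach, assuming a module-level synchronous Connector shared by the whole application (connection details are placeholders):

import atexit

from google.cloud.sql.connector import Connector

# one Connector for the whole process, closed when the interpreter exits
connector = Connector()
atexit.register(connector.close)  # equivalent to the lambda above

def getconn():
    return connector.connect(
        "project:region:instance",  # placeholder instance connection name
        "pg8000",
        user="my-user",
        password="my-password",
        db="my-db",
    )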
Bug Description
We use the connector with IAM Auth + Cloud SQL for Postgres. It generally works okay, but we are occasionally seeing errors on shut down of our application server that look like this:
The exception itself may vary - mostly aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe, but we've also seen aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected.

To me, this looks like an async task is not checked for exceptions, even though at least the one in the referenced line looks okay:
cloud-sql-python-connector/google/cloud/sql/connector/instance.py, line 376 in 406b383
Since these errors are only logged when the application shuts down, it is somewhat hard to debug what's causing the connection issues, and whether they cause actual problems or the connection is re-established successfully.
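As an illustration of that point, a hedged sketch (not the connector's actual code) of how a background task's exception can be surfaced as soon as the task finishes, rather than only at shutdown:

import asyncio
import logging

# attach a done-callback so the task's exception is retrieved and logged
# when it finishes, instead of surfacing as an unretrieved exception later
def log_task_exception(task: asyncio.Task) -> None:
    if not task.cancelled() and task.exception() is not None:
        logging.error("background task failed", exc_info=task.exception())

async def main() -> None:
    async def refresh() -> None:
        # stand-in for a failing background refresh
        raise ConnectionError("simulated broken pipe")

    task = asyncio.create_task(refresh())
    task.add_done_callback(log_task_exception)
    await asyncio.sleep(0.1)  # give the task and its callback time to run

asyncio.run(main())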
Example code (or command)
No response
Stacktrace
No response
Steps to reproduce?
Environment
python:3.10.11-slim docker image

Additional Details
No response