Commit fa1e927 (1 parent: 9f72701), authored and committed by Kye Gomez on Jul 14, 2024.
Showing 11 changed files with 945 additions and 102 deletions.

@@ -0,0 +1,244 @@

# How to Run Tests Using Pytest: A Comprehensive Guide

In modern software development, automated testing is crucial for ensuring the reliability and functionality of your code. One of the most popular testing frameworks for Python is `pytest`.

This blog provides an in-depth look at how to run tests using `pytest`, including testing a single file, multiple files, and every file in the repository, as well as guidelines that help contributors run tests reliably.

## What is Pytest?

`pytest` is a testing framework for Python that makes it easy to write simple and scalable test cases. It supports fixtures and parameterized testing, and it has a rich plugin architecture. `pytest` is widely used because of its ease of use and the powerful features that streamline the testing process.

## Installation

To get started with `pytest`, you need to install it. You can install `pytest` using `pip`:

```bash
pip install pytest
```

## Writing Your First Test

Before diving into running tests, let’s write a simple test. Create a file named `test_sample.py` with the following content:

```python
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 2 - 1 == 1
```

In this example, we have defined two basic tests: `test_addition` and `test_subtraction`.

## Running Tests

### Running a Single Test File

To run a single test file, you can use the `pytest` command followed by the filename. For example, to run the tests in `test_sample.py`, use the following command:

```bash
pytest test_sample.py
```

The output will show the test results, including the number of tests passed, failed, or skipped.

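You can also select a single test function inside a file by appending `::` and the test name; only that test is collected and run:

```bash
pytest test_sample.py::test_addition
```
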
### Running Multiple Test Files

You can also run multiple test files by specifying their filenames separated by a space. For example:

```bash
pytest test_sample.py test_another_sample.py
```

If you have multiple test files in a directory, you can run all of them by specifying the directory name:

```bash
pytest tests/
```

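Another way to narrow a run is the `-k` option, which selects tests whose names match a Python-like expression. With the sample file above, the following runs `test_addition` and `test_subtraction` and nothing else:

```bash
pytest -k "addition or subtraction"
```
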
### Running All Tests in the Repository

To run all tests in the repository, navigate to the root directory of your project and simply run:

```bash
pytest
```

`pytest` will automatically discover and run all the test files that match the pattern `test_*.py` or `*_test.py`.

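A few command-line flags are worth knowing when running the whole suite:

```bash
pytest -v           # verbose: list every test and its result
pytest -x           # stop after the first failure
pytest --maxfail=2  # stop after two failures
pytest -q           # quieter, summary-style output
```
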
### Test Discovery

`pytest` automatically discovers test files and test functions based on their naming conventions. By default, it looks for files that match the pattern `test_*.py` or `*_test.py` and functions or methods that start with `test_`.

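If your project does not follow the defaults, you can tell `pytest` where and how to look. Below is a minimal `pytest.ini` sketch; the `tests` directory name is just an example and should match your own layout:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
```
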
### Using Markers

`pytest` allows you to use markers to group tests or add metadata to them. Markers can be used to run specific subsets of tests. For example, you can mark a test as `slow` and then run only the slow tests or skip them.

```python
import pytest

@pytest.mark.slow
def test_long_running():
    import time
    time.sleep(5)
    assert True

def test_fast():
    assert True
```

To run only the tests marked as `slow`, use the `-m` option:

```bash
pytest -m slow
```

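Recent versions of `pytest` warn about unknown markers, so it is good practice to register custom markers, for example in `pytest.ini`:

```ini
[pytest]
markers =
    slow: marks tests as slow
```

You can also invert the selection to skip the slow tests:

```bash
pytest -m "not slow"
```
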
### Parameterized Tests

`pytest` supports parameterized testing, which allows you to run a test with different sets of input data. This can be done using the `@pytest.mark.parametrize` decorator.

```python
import pytest

@pytest.mark.parametrize("a,b,expected", [
    (1, 2, 3),
    (2, 3, 5),
    (3, 5, 8),
])
def test_add(a, b, expected):
    assert a + b == expected
```

In this example, `test_add` will run three times with different sets of input data.

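You can also stack several `parametrize` decorators, in which case `pytest` runs the cross product of all value sets. A small sketch:

```python
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [10, 20])
def test_cross_product(x, y):
    # Runs four times: (0, 10), (0, 20), (1, 10), (1, 20).
    assert x + y == y + x
```
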
### Fixtures

Fixtures are a powerful feature of `pytest` that allow you to set up some context for your tests. They can be used to provide a fixed baseline upon which tests can reliably and repeatedly execute.

```python
import pytest

@pytest.fixture
def sample_data():
    return {"name": "John", "age": 30}

def test_sample_data(sample_data):
    assert sample_data["name"] == "John"
    assert sample_data["age"] == 30
```

Fixtures can be used to share setup and teardown code between tests.

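A fixture can also contain teardown logic by yielding instead of returning; everything after the `yield` runs once the test finishes. A minimal sketch with a hypothetical in-memory resource:

```python
import pytest

@pytest.fixture
def fake_connection():
    # Setup: stand-in for opening a real resource.
    connection = {"open": True}
    yield connection  # the test body runs at this point
    # Teardown: runs after the test, even if it failed.
    connection["open"] = False

def test_connection_is_open(fake_connection):
    assert fake_connection["open"]
```
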
## Advanced Usage

### Running Tests in Parallel

`pytest` can run tests in parallel using the `pytest-xdist` plugin. To install `pytest-xdist`, run:

```bash
pip install pytest-xdist
```

To run tests in parallel, use the `-n` option followed by the number of CPU cores you want to use:

```bash
pytest -n 4
```

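If you do not want to hard-code the core count, `pytest-xdist` also accepts `auto`:

```bash
pytest -n auto  # use one worker per available CPU
```
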
### Generating Test Reports

`pytest` can generate detailed test reports. You can use the `--html` option to generate an HTML report:

```bash
pip install pytest-html
pytest --html=report.html
```

This command will generate a file named `report.html` with a detailed report of the test results.

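By default the report links to a separate assets directory; `pytest-html` can also inline everything into a single file, which is convenient to attach as a CI artifact:

```bash
pytest --html=report.html --self-contained-html
```
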
### Code Coverage

You can use the `pytest-cov` plugin to measure code coverage. To install `pytest-cov`, run:

```bash
pip install pytest-cov
```

To generate a coverage report, use the `--cov` option followed by the module name:

```bash
pytest --cov=my_module
```

This command will show the coverage summary in the terminal. You can also generate an HTML report:

```bash
pytest --cov=my_module --cov-report=html
```

The coverage report will be generated in the `htmlcov` directory.

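To see exactly which line numbers are missing coverage directly in the terminal, use the `term-missing` report:

```bash
pytest --cov=my_module --cov-report=term-missing
```
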
## Best Practices for Writing Tests

1. **Write Clear and Concise Tests**: Each test should focus on a single piece of functionality.
2. **Use Descriptive Names**: Test function names should clearly describe what they are testing.
3. **Keep Tests Independent**: Tests should not depend on each other and should run in isolation.
4. **Use Fixtures**: Use fixtures to set up the context for your tests.
5. **Mock External Dependencies**: Use mocking to isolate the code under test from external dependencies (a short sketch follows this list).

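To illustrate the last point, here is a minimal sketch using `pytest`'s built-in `monkeypatch` fixture; `PaymentGateway` and `process_order` are hypothetical stand-ins for code that would normally call an external service:

```python
class PaymentGateway:
    """Hypothetical client that would hit a real API in production."""

    def charge(self, amount):
        raise RuntimeError("no network access in tests")

def process_order(gateway, amount):
    if gateway.charge(amount):
        return {"status": "paid", "amount": amount}
    return {"status": "failed", "amount": amount}

def test_process_order(monkeypatch):
    gateway = PaymentGateway()
    # Replace the network-bound method with a predictable fake.
    monkeypatch.setattr(gateway, "charge", lambda amount: True)
    assert process_order(gateway, 100) == {"status": "paid", "amount": 100}
```
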
## Running Tests Reliably

For contributors and team members, it’s important to run tests reliably to ensure consistent results. Here are some guidelines:

1. **Set Up a Virtual Environment**: Use a virtual environment to manage dependencies and ensure a consistent testing environment.

```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```

2. **Install Dependencies**: Install all required dependencies from the `requirements.txt` file.

```bash
pip install -r requirements.txt
```

3. **Run Tests Before Pushing**: Ensure all tests pass before pushing code to the repository.

4. **Use Continuous Integration (CI)**: Set up CI pipelines to automatically run tests on each commit or pull request.

### Example CI Configuration (GitHub Actions)

Here is an example of a GitHub Actions workflow to run tests using `pytest`:

```yaml
name: Python package
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          pytest
```

This configuration will run the tests on every push and pull request, ensuring that your codebase remains stable.

## Conclusion

`pytest` is a powerful and flexible testing framework that makes it easy to write and run tests for your Python code. By following the guidelines and best practices outlined in this blog, you can ensure that your tests are reliable and your codebase is robust. Whether you are testing a single file, multiple files, or the entire repository, `pytest` provides the tools you need to automate and streamline your testing process.

Happy testing!

@@ -0,0 +1,89 @@

from swarms import OpenAIChat, AgentRearrange, Agent
from swarms.prompts.finance_agent_sys_prompt import (
    FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms.utils.data_to_text import data_to_text

model = OpenAIChat(max_tokens=3000)

# Initialize the agent
receipt_analyzer_agent = Agent(
    agent_name="Receipt Analyzer",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    streaming_on=True,
    # interactive=True,  # Set to False to disable interactive mode
    dynamic_temperature_enabled=True,
    saved_state_path="finance_agent.json",
    # tools=[],  # Add your functions here
    # stopping_token="Stop!",
    # docs_folder="docs",  # Enter your folder name
    # pdf_path="docs/finance_agent.pdf",
    # sop="Calculate the profit for a company.",
    # sop_list=["Calculate the profit for a company."],
    user_name="swarms_corp",
    retry_attempts=3,
    # tool_schema=dict,
    # agent_ops_on=True,
    # long_term_memory=ChromaDB(docs_folder="artifacts"),
    # multi_modal=True,
)


# 2nd agent
analyst_agent = Agent(
    agent_name="Analyst_Agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    streaming_on=True,
    # interactive=True,  # Set to False to disable interactive mode
    dynamic_temperature_enabled=True,
    saved_state_path="finance_agent.json",
    # tools=[],  # Add your functions here
    # stopping_token="Stop!",
    # docs_folder="docs",  # Enter your folder name
    # pdf_path="docs/finance_agent.pdf",
    # sop="Calculate the profit for a company.",
    # sop_list=["Calculate the profit for a company."],
    user_name="swarms_corp",
    retry_attempts=3,
    # tool_schema=dict,
    # agent_ops_on=True,
    # long_term_memory=ChromaDB(docs_folder="artifacts"),
    # multi_modal=True,
)


# Swarm agents
agents = [receipt_analyzer_agent, analyst_agent]

# Flow
flow = f"{receipt_analyzer_agent.agent_name} -> {analyst_agent.agent_name} -> H"
pdf = data_to_text("receipt.pdf")

# Swarm
swarm = AgentRearrange(
    agents=agents,
    flow=flow,
)

# Run the swarm
swarm.run(
    f"Analyze this PDF: {pdf} and return a summary of the expense and if it's necessary"
)