title | tagTitle | tagDescription | date |
---|---|---|---|
Rates | Rates - Center for Computation and Visualization | Check rates for advanced research computing that requires extra resources. | 2020-08-05 18:04:51 +0000 |

We provide services with limited resources at no cost to all members affiliated with Brown. For advanced computing that requires extra resources, we charge a monthly fee. The rates for FY24 are listed below.
The number and size of jobs allowed on Oscar vary with both partition and type of user account. The following partitions are available to all Oscar users:
- Batch - General Purpose Computing
- GPU - GPU Nodes
- BigMem - Large Memory Nodes
Account Type | Partition | CPU Cores | Memory (GB) | GPU | Max Walltime* (Hours) | Cost per Month |
---|---|---|---|---|---|---|
Exploratory | batch | 64 | 492 | None | 48 | $0 |
Exploratory | gpu | 12 | 192 | 2 Std. | 48 | $0 |
Exploratory | bigmem | 32 | 752 | None | 48 | $0 |
HPC Priority | batch | 208 | 1,500 | 2 Std. | 96 | $67 |
HPC Priority+ | batch | 416 | 3,000 | 2 Std. | 96 | $133 |
Standard GPU Priority | gpu | 24 | 192 | 4 Std. | 96 | $67 |
Standard GPU Priority+ | gpu | 48 | 384 | 8 Std. | 96 | $133 |
High End GPU Priority | gpu-he | 24 | 256 | 4 high-end | 96 | $133 |
Large Memory Priority | bigmem | 32 | 2,048 (2 TB) | - | 96 | $33 |
- Note: these values are subject to periodic review and change.
- Each account is assigned a 100 GB Home directory and a 512 GB Scratch directory (Scratch is purged every 30 days).
- Priority accounts and Exploratory accounts associated with a PI get a data directory.
- Exploratory accounts without a PI have no data directory provided.
- Priority accounts have a higher Quality-of-Service (QOS), i.e., jobs from priority accounts will start sooner.
- The maximum number of cores and duration may change based on cluster utilization.
- The HPC Priority account has a Quality-of-Service (QOS) allowing up to 208 cores, 1 TB of memory, and a per-job limit of 1,198,080 core-minutes. This allows a 208-core job to run for 96 hours, a 104-core job to run for 192 hours, or 208 one-core jobs to run for 96 hours.
- The Exploratory account has a Quality-of-Service (QOS) allowing up to 2 GPUs and a total of 5,760 GPU-minutes. This allows a 2-GPU job to run for 48 hours or a 1-GPU job to run for 96 hours.
- GPU Definitions:
- Std - QuadroRTX or lower
- High End - Tesla V100
- For more technical details, please see this link.
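The core-minute budget described above can be checked with simple arithmetic. The following is an illustrative sketch, not an official CCV tool; the function name `fits_qos` is hypothetical, and the limits are taken from the HPC Priority QOS described above.

```python
# Illustrative sketch: does a batch job fit within the HPC Priority QOS?
# Limits come from the notes above: 208 cores, 1,198,080 core-minutes per job.

CORE_MINUTE_LIMIT = 1_198_080  # 208 cores x 96 hours x 60 minutes
MAX_CORES = 208

def fits_qos(cores: int, hours: float) -> bool:
    """Return True if a job of `cores` cores running `hours` hours fits the QOS."""
    return cores <= MAX_CORES and cores * hours * 60 <= CORE_MINUTE_LIMIT

print(fits_qos(208, 96))   # 208-core job for 96 hours -> True
print(fits_qos(104, 192))  # 104-core job for 192 hours -> True
print(fits_qos(208, 97))   # exceeds the core-minute budget -> False
```

The same trade-off applies symmetrically: halving the core count doubles the allowed walltime.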
An investigator may choose to purchase a condo to support their high performance computing needs.
- 5-year lifecycle - condo resources will be available for a duration of 5 years.
- Access to more CPU cores than purchased - the condo will have access to 1.25 times the number of CPU cores purchased for the first 3 years of its lifecycle. For the remaining 2 years, the condo will have access to the number of CPU cores purchased.
- Support - CCV staff will install, upgrade, and maintain condo hardware throughout its lifecycle.
- High job priority - jobs submitted to a condo are scheduled with the highest priority.
- Contact [email protected] to discuss your needs and review purchase options.
Support Level | Description | Cost |
---|---|---|
Advanced Support | Limited code troubleshooting, training, and office hours; limited to 1 week per year | $0 |
General Support | Any staff services requiring more than 1 week's effort per year | $85/hour |
Project Collaboration | Percent time of a specific staff member charged directly to the grant | %FTE |
- 1TB per Brown Faculty Member - Free
- 10TB per awarded Grant at the request of the Brown PI - an active grant account number is required to provide this allocation, and the data will be migrated to archive storage at the end of the grant.
- Additional Storage Allocation
- Rdata: $8.3 / Terabyte / Month ($100 / Terabyte / Year)
- Stronghold Storage: $8.3 / Terabyte / Month ($100 / Terabyte / Year)
- Campus File Storage (replicated): $8.3 / Terabyte / Month ($100 / Terabyte / Year)
- Campus File Storage (non-replicated): $4.2 / Terabyte / Month ($50 / Terabyte / Year)
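The storage pricing above amounts to a flat per-terabyte yearly rate after any free allocation. The following is an illustrative sketch only, not a CCV billing tool; the helper `yearly_cost` and the assumption that the free 1 TB faculty allocation offsets the billed total are mine, not the source's.

```python
# Illustrative sketch of the storage rates listed above ($ / TB / year).
# Assumption: the free 1 TB faculty allocation is subtracted before billing.

YEARLY_RATE_PER_TB = {
    "Rdata": 100,
    "Stronghold": 100,
    "Campus File Storage (replicated)": 100,
    "Campus File Storage (non-replicated)": 50,
}

def yearly_cost(tier: str, terabytes: float, free_tb: float = 1.0) -> float:
    """Yearly cost after subtracting the free allocation (never negative)."""
    billable = max(0.0, terabytes - free_tb)
    return billable * YEARLY_RATE_PER_TB[tier]

print(yearly_cost("Rdata", 6))  # 5 billable TB x $100 = 500.0
```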
The transition from an essentially free service for researchers to one that anticipates that some of Brown's storage costs will be absorbed by other sources will be challenging for some researchers. Some research groups have proposed a variation on the individual payment model in order to smooth out these challenges. In this group plan, all the researchers' individual and grant allocations are pooled under the umbrella of the group; then, the billing takes place at the group/center/department level. CCV will be happy to accommodate a group payment/billing plan.
To set this up, interested departments/centers/institutes should send CCV a list of researchers associated with the group, along with all grants (and their end dates) for the group's PIs. CCV will then generate a group-level bill. Please also let us know who will handle the invoicing.