Actions: triton-inference-server/vllm_backend

CodeQL

330 workflow runs

Tests' re-org
CodeQL #146: Pull request #39 synchronize by oandreeva-nv
April 30, 2024 02:06 2m 50s oandreeva_vllm_tests_fix

Multi-Lora docs follow up
CodeQL #145: Pull request #40 opened by oandreeva-nv
April 26, 2024 18:46 2m 45s oandreeva_multilora_followup

Tests' re-org
CodeQL #144: Pull request #39 opened by oandreeva-nv
April 26, 2024 00:25 2m 13s oandreeva_vllm_tests_fix

Add multi-lora support for Triton vLLM backend
CodeQL #143: Pull request #23 synchronize by l1cacheDell
April 13, 2024 05:07 2m 16s l1cacheDell:main

Updated vllm version to v0.4.0.post1
CodeQL #140: Pull request #38 opened by oandreeva-nv
April 10, 2024 21:27 2m 15s oandreeva_vllm_update0.4.0

Add multi-lora support for Triton vLLM backend
CodeQL #139: Pull request #23 synchronize by l1cacheDell
April 10, 2024 02:53 2m 33s l1cacheDell:main

Support min_tokens parameter
CodeQL #137: Pull request #37 opened by dyastremsky
April 9, 2024 17:51 2m 18s dyas-vllm-version

Add multi-lora support for Triton vLLM backend
CodeQL #136: Pull request #23 synchronize by l1cacheDell
April 9, 2024 03:53 2m 47s l1cacheDell:main

Demonstrate passing "max_tokens" param
CodeQL #127: Pull request #34 synchronize by mkhludnev
March 1, 2024 06:07 2m 26s mkhludnev:patch-2

Update 'main' post-24.02
CodeQL #126: Pull request #36 synchronize by mc-nv
March 1, 2024 01:46 2m 23s mchornyi-post-24.02

Update 'main' post-24.02
CodeQL #125: Pull request #36 synchronize by mc-nv
March 1, 2024 01:46 2m 18s mchornyi-post-24.02

Add exclude_input_in_output option to vllm backend
CodeQL #124: Pull request #35 synchronize by oandreeva-nv
February 29, 2024 23:01 2m 19s oandreeva_echo_option

Update 'main' post-24.02
CodeQL #123: Pull request #36 opened by mc-nv
February 29, 2024 21:28 2m 31s mchornyi-post-24.02

Add exclude_input_in_output option to vllm backend
CodeQL #122: Pull request #35 synchronize by oandreeva-nv
February 29, 2024 18:30 2m 22s oandreeva_echo_option

Demonstrate passing "max_tokens" param
CodeQL #121: Pull request #34 synchronize by mkhludnev
February 29, 2024 06:19 2m 17s mkhludnev:patch-2

Demonstrate passing "max_tokens" param
CodeQL #120: Pull request #34 synchronize by mkhludnev
February 27, 2024 19:44 2m 25s mkhludnev:patch-2

Add exclude_input_in_output option to vllm backend
CodeQL #119: Pull request #35 opened by oandreeva-nv
February 27, 2024 19:02 2m 45s oandreeva_echo_option

Demonstrate passing "max_tokens" param
CodeQL #118: Pull request #34 synchronize by mkhludnev
February 16, 2024 06:25 3m 22s mkhludnev:patch-2

Update README and versions for 24.02 branch
CodeQL #116: Pull request #33 opened by oandreeva-nv
February 12, 2024 23:21 2m 20s oandreeva-24.02-readme

Update to latest version
CodeQL #115: Pull request #32 opened by oandreeva-nv
February 12, 2024 22:52 2m 18s oandreeva_update_docs_2401
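Every run above was triggered by a pull request event (`opened` or `synchronize`). For context, a minimal sketch of a CodeQL workflow that would produce runs like these is shown below; the trigger set and the `languages: python` choice are assumptions for illustration, not the contents of this repository's actual workflow file:

```yaml
# Hypothetical sketch of a CodeQL workflow; the real file in
# triton-inference-server/vllm_backend may differ.
name: CodeQL

on:
  pull_request:            # fires on `opened` and `synchronize`, as in the runs above
  push:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write     # required to upload CodeQL scan results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python      # assumption: the backend code is Python
      - uses: github/codeql-action/analyze@v3
```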