merge from upstream #45

Merged: 66 commits, Nov 16, 2024

Commits (66)
b8deef0
llama : add <|tool_call|> formatting to Granite template (#10177)
gabe-l-hart Nov 5, 2024
a1eaf6a
metal : add quantized FA support (#10149)
ggerganov Nov 6, 2024
1dc04b2
ggml : adjust is_first_call init value (#10193)
ggerganov Nov 6, 2024
94d8cb8
metal : fix from ptr buffer name (#10189)
slaren Nov 6, 2024
b11f9ba
server : remove hack for extra parallel slot (#10187)
ggerganov Nov 6, 2024
5c333e0
metal : add BF16 support (#8439)
ggerganov Nov 6, 2024
3bcd40b
Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acc…
uniartisan Nov 7, 2024
2319126
fix q4_0_8_8 format for corrupted tokens issue (#10198)
snadampal Nov 7, 2024
5107e8c
DRY: Fixes clone functionality (#10192)
wwoodsTM Nov 7, 2024
60e17ce
Remove identical wte/etw logic for jais (#10203)
fmz Nov 7, 2024
97404c4
ggml : add ggml-cpu.h to the public headers (#10204)
slaren Nov 7, 2024
a2c6fd7
scripts : sync update
ggerganov Nov 7, 2024
3b08828
sync : ggml
ggerganov Nov 7, 2024
eec4d71
scripts : add amx to sync-ggml.sh [no ci]
ggerganov Nov 7, 2024
a71d81c
server : revamp chat UI with vuejs and daisyui (#10175)
ngxson Nov 7, 2024
76c6e7f
server : minor UI fix (#10207)
ngxson Nov 7, 2024
d05b312
swift : exclude ggml-metal-embed.metal (#10211)
jhen0409 Nov 8, 2024
841f27a
metal : optimize FA kernels (#10171)
ggerganov Nov 8, 2024
695ad75
metal : improve clarity (minor) (#10171)
ggerganov Nov 8, 2024
ec450d3
metal : opt-in compile flag for BF16 (#10218)
ggerganov Nov 8, 2024
8fc393f
scripts : fix pattern and get n_tokens in one go (#10221)
lhpqaq Nov 9, 2024
e892134
ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156)
amritahs-ibm Nov 9, 2024
5b359bb
ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL oper…
SongXiaoXi Nov 9, 2024
46323fa
metal : hide debug messages from normal log
ggerganov Nov 9, 2024
f018acb
llama : fix Qwen model type strings
ggerganov Nov 9, 2024
bb38cdd
metal : fix F32 accumulation in FA vec kernel (#10232)
ggerganov Nov 9, 2024
39a334a
metal : fix build and some more comments (#10229)
ggerganov Nov 9, 2024
6423c65
metal : reorder write loop in mul mat kernel + style (#10231)
ggerganov Nov 9, 2024
160687b
vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (#10…
jeffbolznv Nov 10, 2024
505f332
server : (web UI) Add back sampler settings (#10239)
MaggotHATE Nov 10, 2024
4b3a921
flake.lock: Update (#10243)
ggerganov Nov 10, 2024
b141e5f
server : enable KV cache defrag by default (#10233)
ggerganov Nov 11, 2024
b0cefea
metal : more precise Q*K in FA vec kernel (#10247)
ggerganov Nov 11, 2024
54ef9cf
vulkan: Throttle the number of shader compiles during the build step.…
jeffbolznv Nov 11, 2024
80dd7ff
vulkan: Optimize contiguous copies (#10254)
jeffbolznv Nov 13, 2024
2e82ffa
sycl : Fixes to broken builds and test-backend-ops (#10257)
Alcpz Nov 13, 2024
a0ec17b
metadata: Detailed Dataset Authorship Metadata (#8875)
mofosyne Nov 13, 2024
0e712a5
server : fix incorrect res in validate_model_chat_template (#10272)
jhen0409 Nov 13, 2024
ff7fb67
server : add missing docs (#10269)
z80maniac Nov 13, 2024
1ee9eea
docs : update bindings list (#10261)
xuegao-tzx Nov 13, 2024
5ea926d
sync : ggml
ggerganov Nov 13, 2024
fb4a0ec
llama : propagate the results of `graph_compute` (#9525)
Xarbirus Nov 13, 2024
66798e4
vulkan: Use macros to make the mat mul pipeline creation more concise…
jeffbolznv Nov 13, 2024
af148c9
vulkan: Optimize binary ops (#10270)
jeffbolznv Nov 14, 2024
2a82891
speculative : fix out-of-bounds access (#10289)
ggerganov Nov 14, 2024
4a8ccb3
CUDA: no -sm row for very small matrices (#10185)
JohannesGaessler Nov 14, 2024
ae8de6d
ggml : build backends as libraries (#10256)
slaren Nov 14, 2024
1607a5e
backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921)
chaxu01 Nov 15, 2024
5a54af4
sycl: Use syclcompat::dp4a (#10267)
Rbiessy Nov 15, 2024
4802ad3
scripts : fix regex in sync [no ci]
ggerganov Nov 15, 2024
231f936
cann: dockerfile and doc adjustment (#10302)
noemotiovon Nov 15, 2024
9901068
server : (web UI) add copy button for code block, fix api key (#10242)
ngxson Nov 15, 2024
57f8355
sycl: Update Intel docker images to use DPC++ 2025.0 (#10305)
Rbiessy Nov 15, 2024
f0204a0
ci: build test musa with cmake (#10298)
yeahdongcn Nov 15, 2024
1842922
AVX BF16 and single scale quant optimizations (#10212)
netrunnereve Nov 15, 2024
cbf5541
sync : ggml
ggerganov Nov 15, 2024
3225008
ggml : vulkan logs (whisper/2547)
thewh1teagle Nov 15, 2024
09ecbcb
cmake : fix ppc64 check (whisper/0)
ggerganov Nov 15, 2024
883d206
ggml : fix some build issues
slaren Nov 15, 2024
4047be7
scripts: update compare-llama-bench.py (#10319)
JohannesGaessler Nov 15, 2024
74d73dc
Make updates to fix issues with clang-cl builds while using AVX512 fl…
Srihari-mcw Nov 15, 2024
89e4caa
llama : save number of parameters and the size in llama_model (#10286)
FirstTimeEZ Nov 16, 2024
1e58ee1
ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)
eddnjjn Nov 16, 2024
dd3a6ce
vulkan : add cmake preset debug/release (#10306)
FirstTimeEZ Nov 16, 2024
772703c
vulkan: Optimize some mat-vec mul quant shaders (#10296)
jeffbolznv Nov 16, 2024
bce287c
Merge branch 'layla-build' into merge
l3utterfly Nov 16, 2024
Files changed
4 changes: 2 additions & 2 deletions .devops/llama-cli-cann.Dockerfile
@@ -1,6 +1,6 @@
ARG ASCEND_VERSION=8.0.rc2.alpha003-910b-openeuler22.03-py3.8

FROM cosdt/cann:$ASCEND_VERSION AS build
FROM ascendai/cann:$ASCEND_VERSION AS build

WORKDIR /app

@@ -26,7 +26,7 @@ RUN echo "Building with static libs" && \
cmake --build build --config Release --target llama-cli

# TODO: use image with NNRT
FROM cosdt/cann:$ASCEND_VERSION AS runtime
FROM ascendai/cann:$ASCEND_VERSION AS runtime
COPY --from=build /app/build/bin/llama-cli /llama-cli

ENV LC_ALL=C.utf8
9 changes: 5 additions & 4 deletions .devops/llama-cli-cuda.Dockerfile
@@ -23,15 +23,16 @@ RUN if [ "${CUDA_DOCKER_ARCH}" != "default" ]; then \
export CMAKE_ARGS="-DCMAKE_CUDA_ARCHITECTURES=${CUDA_DOCKER_ARCH}"; \
fi && \
cmake -B build -DGGML_CUDA=ON ${CMAKE_ARGS} -DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined . && \
cmake --build build --config Release --target llama-cli -j$(nproc)
cmake --build build --config Release --target llama-cli -j$(nproc) && \
mkdir -p /app/lib && \
find build -name "*.so" -exec cp {} /app/lib \;

FROM ${BASE_CUDA_RUN_CONTAINER} AS runtime

RUN apt-get update && \
apt-get install -y libgomp1

COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
COPY --from=build /app/build/bin/llama-cli /llama-cli
COPY --from=build /app/lib/ /
COPY --from=build /app/build/bin/llama-cli /

ENTRYPOINT [ "/llama-cli" ]
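
With ggml backends now built as separate shared libraries (commit ae8de6d in the list above), the runtime stage copies every *.so produced by the build via /app/lib instead of the two hard-coded libggml.so/libllama.so paths. A minimal sketch of building and running the updated CUDA image from the repository root; the image tag, model path, and prompt are placeholders, not part of this PR:

# build the image defined by the updated Dockerfile
docker build -t llama-cli-cuda -f .devops/llama-cli-cuda.Dockerfile .
# run it with GPU access (assumes the NVIDIA container toolkit is installed on the host)
docker run --rm --gpus all -v /path/to/models:/models llama-cli-cuda -m /models/model.gguf -p "Hello"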
2 changes: 1 addition & 1 deletion .devops/llama-cli-intel.Dockerfile
@@ -1,4 +1,4 @@
ARG ONEAPI_VERSION=2024.1.1-devel-ubuntu22.04
ARG ONEAPI_VERSION=2025.0.0-0-devel-ubuntu22.04

FROM intel/oneapi-basekit:$ONEAPI_VERSION AS build

7 changes: 4 additions & 3 deletions .devops/llama-cli-musa.Dockerfile
@@ -16,15 +16,16 @@ WORKDIR /app
COPY . .

RUN cmake -B build -DGGML_MUSA=ON ${CMAKE_ARGS} -DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined . && \
cmake --build build --config Release --target llama-cli -j$(nproc)
cmake --build build --config Release --target llama-cli -j$(nproc) && \
mkdir -p /app/lib && \
find build -name "*.so" -exec cp {} /app/lib \;

FROM ${BASE_MUSA_RUN_CONTAINER} AS runtime

RUN apt-get update && \
apt-get install -y libgomp1

COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
COPY --from=build /app/lib/ /
COPY --from=build /app/build/bin/llama-cli /llama-cli

ENTRYPOINT [ "/llama-cli" ]
7 changes: 4 additions & 3 deletions .devops/llama-server-cuda.Dockerfile
@@ -23,15 +23,16 @@ RUN if [ "${CUDA_DOCKER_ARCH}" != "default" ]; then \
export CMAKE_ARGS="-DCMAKE_CUDA_ARCHITECTURES=${CUDA_DOCKER_ARCH}"; \
fi && \
cmake -B build -DGGML_CUDA=ON -DLLAMA_CURL=ON ${CMAKE_ARGS} -DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined . && \
cmake --build build --config Release --target llama-server -j$(nproc)
cmake --build build --config Release --target llama-server -j$(nproc) && \
mkdir -p /app/lib && \
find build -name "*.so" -exec cp {} /app/lib \;

FROM ${BASE_CUDA_RUN_CONTAINER} AS runtime

RUN apt-get update && \
apt-get install -y libcurl4-openssl-dev libgomp1 curl

COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
COPY --from=build /app/lib/ /
COPY --from=build /app/build/bin/llama-server /llama-server

# Must be set to 0.0.0.0 so it can listen to requests from host machine
2 changes: 1 addition & 1 deletion .devops/llama-server-intel.Dockerfile
@@ -1,4 +1,4 @@
ARG ONEAPI_VERSION=2024.1.1-devel-ubuntu22.04
ARG ONEAPI_VERSION=2025.0.0-0-devel-ubuntu22.04

FROM intel/oneapi-basekit:$ONEAPI_VERSION AS build

7 changes: 4 additions & 3 deletions .devops/llama-server-musa.Dockerfile
@@ -16,15 +16,16 @@ WORKDIR /app
COPY . .

RUN cmake -B build -DGGML_MUSA=ON -DLLAMA_CURL=ON ${CMAKE_ARGS} -DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined . && \
cmake --build build --config Release --target llama-server -j$(nproc)
cmake --build build --config Release --target llama-server -j$(nproc) && \
mkdir -p /app/lib && \
find build -name "*.so" -exec cp {} /app/lib \;

FROM ${BASE_MUSA_RUN_CONTAINER} AS runtime

RUN apt-get update && \
apt-get install -y libcurl4-openssl-dev libgomp1 curl

COPY --from=build /app/build/ggml/src/libggml.so /libggml.so
COPY --from=build /app/build/src/libllama.so /libllama.so
COPY --from=build /app/lib/ /
COPY --from=build /app/build/bin/llama-server /llama-server

# Must be set to 0.0.0.0 so it can listen to requests from host machine
6 changes: 3 additions & 3 deletions .devops/nix/package.nix
@@ -126,9 +126,9 @@ effectiveStdenv.mkDerivation (finalAttrs: {
};

postPatch = ''
substituteInPlace ./ggml/src/ggml-metal.m \
substituteInPlace ./ggml/src/ggml-metal/ggml-metal.m \
--replace '[bundle pathForResource:@"ggml-metal" ofType:@"metal"];' "@\"$out/bin/ggml-metal.metal\";"
substituteInPlace ./ggml/src/ggml-metal.m \
substituteInPlace ./ggml/src/ggml-metal/ggml-metal.m \
--replace '[bundle pathForResource:@"default" ofType:@"metallib"];' "@\"$out/bin/default.metallib\";"
'';

@@ -173,7 +173,7 @@ effectiveStdenv.mkDerivation (finalAttrs: {
(cmakeBool "GGML_NATIVE" false)
(cmakeBool "GGML_BLAS" useBlas)
(cmakeBool "GGML_CUDA" useCuda)
(cmakeBool "GGML_HIPBLAS" useRocm)
(cmakeBool "GGML_HIP" useRocm)
(cmakeBool "GGML_METAL" useMetalKit)
(cmakeBool "GGML_VULKAN" useVulkan)
(cmakeBool "GGML_STATIC" enableStatic)
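Besides the relocated ggml-metal sources, the nix package switches from the renamed GGML_HIPBLAS flag to GGML_HIP. As a rough sketch, the package can still be exercised through the repository flake, assuming the flake's default output wires in this package.nix (the flake itself is not part of this diff):

# from the repository root, with nix flakes enabled
nix build .
# the default output is expected to place binaries under result/bin (assumption)
./result/bin/llama-cli --help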
10 changes: 10 additions & 0 deletions .editorconfig
@@ -24,6 +24,16 @@ insert_final_newline = unset
[examples/server/public/*]
indent_size = 2

[examples/server/public/deps_*]
trim_trailing_whitespace = unset
indent_style = unset
indent_size = unset

[examples/server/deps_*]
trim_trailing_whitespace = unset
indent_style = unset
indent_size = unset

[examples/llama.swiftui/llama.swiftui.xcodeproj/*]
indent_style = tab

50 changes: 42 additions & 8 deletions .github/workflows/build.yml
@@ -55,7 +55,13 @@ jobs:
sysctl -a
mkdir build
cd build
cmake -DLLAMA_FATAL_WARNINGS=ON -DGGML_METAL_EMBED_LIBRARY=ON -DLLAMA_CURL=ON -DGGML_RPC=ON -DBUILD_SHARED_LIBS=OFF ..
cmake .. \
-DLLAMA_FATAL_WARNINGS=ON \
-DLLAMA_CURL=ON \
-DGGML_METAL_USE_BF16=ON \
-DGGML_METAL_EMBED_LIBRARY=ON \
-DGGML_RPC=ON \
-DBUILD_SHARED_LIBS=OFF
cmake --build . --config Release -j $(sysctl -n hw.logicalcpu)

- name: Test
@@ -113,7 +119,12 @@ jobs:
sysctl -a
# Metal is disabled due to intermittent failures with Github runners not having a GPU:
# https://github.com/ggerganov/llama.cpp/actions/runs/8635935781/job/23674807267#step:5:2313
cmake -B build -DLLAMA_FATAL_WARNINGS=ON -DGGML_METAL=OFF -DLLAMA_CURL=ON -DGGML_RPC=ON -DBUILD_SHARED_LIBS=OFF
cmake -B build \
-DLLAMA_FATAL_WARNINGS=ON \
-DLLAMA_CURL=ON \
-DGGML_METAL=OFF \
-DGGML_RPC=ON \
-DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j $(sysctl -n hw.logicalcpu)

- name: Test
@@ -394,15 +405,36 @@ jobs:
- name: Build with native CMake HIP support
id: cmake_build
run: |
cmake -B build -S . -DCMAKE_HIP_COMPILER="$(hipconfig -l)/clang" -DGGML_HIPBLAS=ON
cmake -B build -S . -DCMAKE_HIP_COMPILER="$(hipconfig -l)/clang" -DGGML_HIP=ON
cmake --build build --config Release -j $(nproc)

- name: Build with legacy HIP support
id: cmake_build_legacy_hip
run: |
cmake -B build2 -S . -DCMAKE_C_COMPILER=hipcc -DCMAKE_CXX_COMPILER=hipcc -DGGML_HIPBLAS=ON
cmake -B build2 -S . -DCMAKE_C_COMPILER=hipcc -DCMAKE_CXX_COMPILER=hipcc -DGGML_HIP=ON
cmake --build build2 --config Release -j $(nproc)

ubuntu-22-cmake-musa:
runs-on: ubuntu-22.04
container: mthreads/musa:rc3.1.0-devel-ubuntu22.04

steps:
- name: Clone
id: checkout
uses: actions/checkout@v4

- name: Dependencies
id: depends
run: |
apt-get update
apt-get install -y build-essential git cmake libcurl4-openssl-dev

- name: Build with native CMake MUSA support
id: cmake_build
run: |
cmake -B build -S . -DGGML_MUSA=ON
cmake --build build --config Release -j $(nproc)

ubuntu-22-cmake-sycl:
runs-on: ubuntu-22.04

@@ -569,6 +601,7 @@ jobs:
mkdir build
cd build
cmake -G Xcode .. \
-DGGML_METAL_USE_BF16=ON \
-DGGML_METAL_EMBED_LIBRARY=ON \
-DLLAMA_BUILD_EXAMPLES=OFF \
-DLLAMA_BUILD_TESTS=OFF \
@@ -599,6 +632,7 @@ jobs:
mkdir build
cd build
cmake -G Xcode .. \
-DGGML_METAL_USE_BF16=ON \
-DGGML_METAL_EMBED_LIBRARY=ON \
-DLLAMA_BUILD_EXAMPLES=OFF \
-DLLAMA_BUILD_TESTS=OFF \
@@ -734,7 +768,7 @@ jobs:
id: clone_kompute
if: ${{ matrix.build == 'kompute-x64' }}
run: |
git submodule update --init ggml/src/kompute
git submodule update --init ggml/src/ggml-kompute/kompute

- name: Download OpenBLAS
id: get_openblas
@@ -917,7 +951,7 @@ jobs:
shell: bash

env:
WINDOWS_BASEKIT_URL: https://registrationcenter-download.intel.com/akdlm/IRC_NAS/7dff44ba-e3af-4448-841c-0d616c8da6e7/w_BaseKit_p_2024.1.0.595_offline.exe
WINDOWS_BASEKIT_URL: https://registrationcenter-download.intel.com/akdlm/IRC_NAS/b380d914-366b-4b77-a74a-05e3c38b3514/intel-oneapi-base-toolkit-2025.0.0.882_offline.exe
WINDOWS_DPCPP_MKL: intel.oneapi.win.cpp-dpcpp-common:intel.oneapi.win.mkl.devel
ONEAPI_ROOT: "C:/Program Files (x86)/Intel/oneAPI"
steps:
@@ -1001,7 +1035,7 @@ jobs:
run: |
$env:HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
$env:CMAKE_PREFIX_PATH="${env:HIP_PATH}"
cmake -G "Unix Makefiles" -B build -S . -DCMAKE_C_COMPILER="${env:HIP_PATH}\bin\clang.exe" -DCMAKE_CXX_COMPILER="${env:HIP_PATH}\bin\clang++.exe" -DGGML_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release -DGGML_RPC=ON
cmake -G "Unix Makefiles" -B build -S . -DCMAKE_C_COMPILER="${env:HIP_PATH}\bin\clang.exe" -DCMAKE_CXX_COMPILER="${env:HIP_PATH}\bin\clang++.exe" -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release -DGGML_RPC=ON
cmake --build build -j ${env:NUMBER_OF_PROCESSORS}

windows-latest-cmake-hip-release:
@@ -1037,7 +1071,7 @@ jobs:
run: |
$env:HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
$env:CMAKE_PREFIX_PATH="${env:HIP_PATH}"
cmake -G "Unix Makefiles" -B build -S . -DCMAKE_C_COMPILER="${env:HIP_PATH}\bin\clang.exe" -DCMAKE_CXX_COMPILER="${env:HIP_PATH}\bin\clang++.exe" -DGGML_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release -DAMDGPU_TARGETS=${{ matrix.gpu_target }} -DGGML_RPC=ON
cmake -G "Unix Makefiles" -B build -S . -DCMAKE_C_COMPILER="${env:HIP_PATH}\bin\clang.exe" -DCMAKE_CXX_COMPILER="${env:HIP_PATH}\bin\clang++.exe" -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release -DAMDGPU_TARGETS=${{ matrix.gpu_target }} -DGGML_RPC=ON
cmake --build build -j ${env:NUMBER_OF_PROCESSORS}
md "build\bin\rocblas\library\"
cp "${env:HIP_PATH}\bin\hipblas.dll" "build\bin\"
2 changes: 1 addition & 1 deletion .gitmodules
@@ -1,3 +1,3 @@
[submodule "kompute"]
path = ggml/src/kompute
path = ggml/src/ggml-kompute/kompute
url = https://github.com/nomic-ai/kompute.git
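
Because the kompute submodule moves from ggml/src/kompute to ggml/src/ggml-kompute/kompute, an existing checkout needs its submodule configuration re-synced after pulling this change; a minimal sketch:

# refresh the recorded submodule path, then fetch it at the new location
git submodule sync
git submodule update --init ggml/src/ggml-kompute/kompute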
1 change: 0 additions & 1 deletion CMakeLists.txt
@@ -140,7 +140,6 @@ set(LLAMA_INCLUDE_INSTALL_DIR ${CMAKE_INSTALL_INCLUDEDIR} CACHE PATH "Location o
set(LLAMA_LIB_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR} CACHE PATH "Location of library files")
set(LLAMA_BIN_INSTALL_DIR ${CMAKE_INSTALL_BINDIR} CACHE PATH "Location of binary files")


# At the moment some compile definitions are placed within the ggml/src
# directory but not exported on the `ggml` target. This could be improved by
# determining _precisely_ which defines are necessary for the llama-config
34 changes: 19 additions & 15 deletions CMakePresets.json
@@ -24,11 +24,12 @@
"CMAKE_INSTALL_RPATH": "$ORIGIN;$ORIGIN/.."
}
},
{ "name": "debug", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "Debug" } },
{ "name": "release", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" } },
{ "name": "reldbg", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "RelWithDebInfo" } },
{ "name": "static", "hidden": true, "cacheVariables": { "GGML_STATIC": "ON" } },
{ "name": "sycl_f16", "hidden": true, "cacheVariables": { "GGML_SYCL_F16": "ON" } },
{ "name": "debug", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "Debug" } },
{ "name": "release", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" } },
{ "name": "reldbg", "hidden": true, "cacheVariables": { "CMAKE_BUILD_TYPE": "RelWithDebInfo" } },
{ "name": "static", "hidden": true, "cacheVariables": { "GGML_STATIC": "ON" } },
{ "name": "sycl_f16", "hidden": true, "cacheVariables": { "GGML_SYCL_F16": "ON" } },
{ "name": "vulkan", "hidden": true, "cacheVariables": { "GGML_VULKAN": "ON" } },

{
"name": "arm64-windows-msvc", "hidden": true,
@@ -57,25 +58,28 @@
}
},

{ "name": "arm64-windows-llvm-debug" , "inherits": [ "base", "arm64-windows-llvm", "debug" ] },
{ "name": "arm64-windows-llvm-release", "inherits": [ "base", "arm64-windows-llvm", "reldbg" ] },
{ "name": "arm64-windows-llvm+static-release", "inherits": [ "base", "arm64-windows-llvm", "reldbg", "static" ] },
{ "name": "arm64-windows-llvm-debug", "inherits": [ "base", "arm64-windows-llvm", "debug" ] },
{ "name": "arm64-windows-llvm-release", "inherits": [ "base", "arm64-windows-llvm", "reldbg" ] },
{ "name": "arm64-windows-llvm+static-release", "inherits": [ "base", "arm64-windows-llvm", "reldbg", "static" ] },

{ "name": "arm64-apple-clang-debug" , "inherits": [ "base", "arm64-apple-clang", "debug" ] },
{ "name": "arm64-apple-clang-release" , "inherits": [ "base", "arm64-apple-clang", "reldbg" ] },
{ "name": "arm64-apple-clang+static-release" , "inherits": [ "base", "arm64-apple-clang", "reldbg", "static" ] },
{ "name": "arm64-apple-clang-debug", "inherits": [ "base", "arm64-apple-clang", "debug" ] },
{ "name": "arm64-apple-clang-release", "inherits": [ "base", "arm64-apple-clang", "reldbg" ] },
{ "name": "arm64-apple-clang+static-release", "inherits": [ "base", "arm64-apple-clang", "reldbg", "static" ] },

{ "name": "arm64-windows-msvc-debug" , "inherits": [ "base", "arm64-windows-msvc", "debug" ] },
{ "name": "arm64-windows-msvc-debug", "inherits": [ "base", "arm64-windows-msvc", "debug" ] },
{ "name": "arm64-windows-msvc-release", "inherits": [ "base", "arm64-windows-msvc", "reldbg" ] },
{ "name": "arm64-windows-msvc+static-release", "inherits": [ "base", "arm64-windows-msvc", "reldbg", "static" ] },

{ "name": "x64-windows-msvc-debug" , "inherits": [ "base", "debug" ] },
{ "name": "x64-windows-msvc-debug", "inherits": [ "base", "debug" ] },
{ "name": "x64-windows-msvc-release", "inherits": [ "base", "reldbg" ] },
{ "name": "x64-windows-msvc+static-release", "inherits": [ "base", "reldbg", "static" ] },

{ "name": "x64-windows-sycl-debug" , "inherits": [ "sycl-base", "debug" ] },
{ "name": "x64-windows-sycl-debug", "inherits": [ "sycl-base", "debug" ] },
{ "name": "x64-windows-sycl-debug-f16", "inherits": [ "sycl-base", "debug", "sycl_f16" ] },
{ "name": "x64-windows-sycl-release", "inherits": [ "sycl-base", "release" ] },
{ "name": "x64-windows-sycl-release-f16", "inherits": [ "sycl-base", "release", "sycl_f16" ] }
{ "name": "x64-windows-sycl-release-f16", "inherits": [ "sycl-base", "release", "sycl_f16" ] },

{ "name": "x64-windows-vulkan-debug", "inherits": [ "base", "vulkan", "debug" ] },
{ "name": "x64-windows-vulkan-release", "inherits": [ "base", "vulkan", "release" ] }
]
}
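
The two new vulkan presets reuse the existing base fragments, so they are invoked like the other configure presets; a minimal sketch (the actual build directory comes from whatever binaryDir the base preset defines, which is not visible in this hunk):

# inspect and use the new Vulkan configure presets
cmake --list-presets
cmake --preset x64-windows-vulkan-release
cmake --build <binary-dir> --config Release    # <binary-dir>: the directory the preset configured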