Package: llama.cpp (8064+dfsg-1 and others)
Links for llama.cpp
Debian resources:
Download source package llama.cpp:
Maintainers:
External resources:
- Homepage [github.com]
Similar packages:
LLM inference in C/C++ - metapackage
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads MTT GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The compute functionality is provided by ggml. By default, ggml's CPU backend is installed, but there are many other backends for CPUs and GPUs.
This is a meta-package that depends on all of the relevant binary packages.
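For orientation, a typical first run with this metapackage looks something like the sketch below. This is a minimal sketch, assuming the llama.cpp-tools package ships the upstream llama-cli and llama-server binaries under their usual names; the model path is a placeholder for any local GGUF file:

```
# Install the metapackage; it pulls in llama.cpp-tools and the default ggml CPU backend
sudo apt install llama.cpp

# One-shot text generation against a local GGUF model (path is a placeholder)
llama-cli -m /path/to/model.gguf -p "Explain quantization in one sentence." -n 128

# Or serve the same model over HTTP with the bundled server
llama-server -m /path/to/model.gguf --port 8080
```

Because the compute backend is pluggable, installing a different libggml backend package accelerates the same commands without changing them.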
Other packages related to llama.cpp
- dep: libc6 (>= 2.38) [sparc64]
  GNU C Library: Shared libraries
  Also a virtual package provided by: libc6-udeb
- dep: libc6.1 (>= 2.38) [alpha]
  GNU C Library: Shared libraries
  Also a virtual package provided by: libc6.1-udeb
- dep: libcurl4t64 (>= 7.16.2) [not all]
  easy-to-use client-side URL transfer library (OpenSSL flavour)
- dep: libgcc-s1 (>= 3.4) [alpha]
  GCC support library
- dep: libgcc-s1 (>= 4.3) [sparc64]
- dep: libggml-cpu (<< 0.0~git20250713) [not all]
  Tensor library for machine learning - CPU backend
  or libggml-backend (<< 0.0~git20250713)
  Package not available
- dep: libggml-cpu (>= 0.0~git20250712) [not all]
  or libggml-backend (>= 0.0~git20250712)
- dep: libstdc++6 (>= 14) [not all]
  GNU Standard C++ Library v3
- dep: llama.cpp-tools [all]
  LLM inference in C/C++ - main utilities
- dep: python3 [not all]
  interactive high-level object-oriented language (default python3 version)
- rec: llama.cpp-tools-extra
  LLM inference in C/C++ - extra utilities
- rec: python3-gguf
  Python library for working with GGUF files
- sug: llama.cpp-examples
  LLM inference in C/C++ - example programs
Download llama.cpp
| Architecture | Version | Package Size | Installed Size | Files |
|---|---|---|---|---|
| all | 8064+dfsg-1 | 7.9 kB | 22.0 kB | [list of files] |
| alpha (unofficial port) | 5882+dfsg-2 | 7,704.8 kB | 67,836.0 kB | [list of files] |
| sparc64 (unofficial port) | 5882+dfsg-2 | 6,809.5 kB | 101,618.0 kB | [list of files] |
