When loading the model I get the following error: #17

Open
@JeisonJimenezA

Description

llm_load_tensors: ggml ctx size = 0.16 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 9363.40 MB
llm_load_tensors: offloading 6 repeating layers to GPU
llm_load_tensors: offloaded 6/43 layers to GPU
llm_load_tensors: VRAM used: 1637.37 MB
.................................................................................GGML_ASSERT: D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\ggml-cuda.cu:5925: false
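For context, the load that produces this log was presumably invoked roughly as follows (a minimal sketch assuming the llama-cpp-python bindings; the model path is a placeholder and the layer count is taken from the "offloaded 6/43 layers" line above):

```python
from llama_cpp import Llama

# Hypothetical model path; substitute the actual GGUF/GGML file being loaded.
llm = Llama(
    model_path="./models/model.gguf",
    n_gpu_layers=6,  # matches "offloaded 6/43 layers to GPU" in the log
)
```

The GGML_ASSERT in ggml-cuda.cu fires after the layers have already been offloaded, so the failure happens during CUDA tensor setup rather than in the Python call itself.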
