
NLP Collective

A collective focused on NLP (natural language processing), the transformation or extraction of useful information from natural language data.
38.6k Questions
9.9k Members

Pinned content


NLP admins have deemed these posts noteworthy.

9 votes
1k views
Collection

Natural Language Processing FAQ

Frequently asked questions relating to NLP. Many of these are questions asked over and over; duplicates would likely be closed in favor of these. Add the best answer (using the ...

Can you answer these questions?


These questions still don't have an answer

0 votes
0 answers
5 views

Asynchronous multi-client Hugging Face inference server without blocking GPU utilization

I'm building a local inference server that handles multiple user requests concurrently on a single GPU. Each user sends a prompt to a Hugging Face model (e.g., Llama-2, Mistral, Falcon). However, I ...
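One common pattern for this kind of server is to funnel every request through an asyncio queue so that a single worker serializes (or batches) GPU calls while request handlers stay non-blocking. The sketch below is hedged: `run_model` is a stand-in for an actual Hugging Face `generate()` call, not the real API.

```python
import asyncio

def run_model(prompt: str) -> str:
    # Placeholder for model.generate(...); a real call would be run
    # in a thread so the event loop stays responsive during GPU work.
    return f"echo: {prompt}"

async def gpu_worker(queue: asyncio.Queue) -> None:
    # Single consumer: only one model call touches the GPU at a time.
    while True:
        prompt, fut = await queue.get()
        try:
            # to_thread keeps the blocking model call off the event loop
            result = await asyncio.to_thread(run_model, prompt)
            fut.set_result(result)
        except Exception as exc:
            fut.set_exception(exc)
        finally:
            queue.task_done()

async def handle_request(queue: asyncio.Queue, prompt: str) -> str:
    # Each handler enqueues its prompt and awaits a future for the result.
    fut: asyncio.Future = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(gpu_worker(queue))
    # Several "users" submit prompts concurrently
    results = await asyncio.gather(
        *(handle_request(queue, f"prompt-{i}") for i in range(3))
    )
    worker.cancel()
    return results

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The same queue-worker shape extends naturally to micro-batching: the worker can drain several pending prompts at once and pass them to the model as a batch.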
0 votes
0 answers
32 views

PyTorch with Docker issues: torch.cuda.is_available() = False

I'm having an issue with PyTorch in a Docker container where torch.cuda.is_available() returns False, but the same PyTorch version works correctly outside the container. Environment Host: Debian 12 ...
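A frequent cause is launching the container without GPU access. The diagnostic commands below are a sketch assuming the NVIDIA Container Toolkit is installed on the host; the image name is a placeholder for whatever image the question actually uses.

```shell
# Verify the host driver and GPU are visible first
nvidia-smi

# Run the container with GPU access explicitly enabled; without
# --gpus all (or the nvidia runtime), torch.cuda.is_available()
# returns False inside the container even when the host GPU works.
docker run --rm --gpus all pytorch/pytorch:latest \
    python -c "import torch; print(torch.cuda.is_available())"
```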
0 votes
0 answers
28 views

Unable to display emotional analysis in Shiny.io app

I am using the ggplot code below to display the graph of emotional analysis in a shiny.io app. ggplot(emotion_counts, aes(word, n)) + geom_col(aes(fill = sentiment)) + facet_wrap(~sentiment, scales = "...
0 votes
0 answers
11 views

Error when running apply_chat_template: chatGLM4Tokenizer does not have padding_side

I am trying to run the simple example given on the nikravan/glm-4vq Hugging Face page, which is import torch from transformers import AutoModelForCausalLM, AutoTokenizer from PIL import Image device = ...
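One commonly suggested workaround for this class of error is a version mismatch: custom remote-code tokenizers can predate transformers releases whose apply_chat_template reads a padding_side attribute. The sketch below sets the attribute manually before calling the template; the model id is taken from the question, but the workaround itself is an assumption (pinning transformers to the version the model card targets is the alternative).

```python
from transformers import AutoTokenizer

# Load the custom tokenizer shipped with the model repo
tokenizer = AutoTokenizer.from_pretrained(
    "nikravan/glm-4vq", trust_remote_code=True
)

# Give the tokenizer the attribute apply_chat_template expects
# (assumption: the custom class simply never defined it).
if not hasattr(tokenizer, "padding_side"):
    tokenizer.padding_side = "left"
```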
-3 votes
0 answers
45 views

Use LM Studio LLM model as embedding model with LangChain

In LM Studio, I have run an OpenAI-compatible server exposing a Gemma 3 model. Now consider the following code: from langchain_openai import OpenAIEmbeddings # Load embeddings embeddings = OpenAIEmbeddings( ...
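The usual pattern is to point OpenAIEmbeddings at LM Studio's local OpenAI-compatible endpoint. In the sketch below the URL/port are LM Studio's defaults and the model id is a placeholder, both assumptions; check_embedding_ctx_length=False disables client-side tokenization that local servers typically don't support.

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    base_url="http://localhost:1234/v1",   # LM Studio default endpoint
    api_key="lm-studio",                   # LM Studio ignores the key value
    model="text-embedding-nomic-embed-text-v1.5",  # placeholder model id
    check_embedding_ctx_length=False,
)

vector = embeddings.embed_query("hello world")
```

Note that a generative chat model such as Gemma 3 generally does not serve an embeddings endpoint; LM Studio needs an actual embedding model loaded for /v1/embeddings to return vectors.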