
Questions tagged [gpu]

In the context of machine learning, questions about Graphics Processing Units (GPUs) typically concern hardware requirements, design considerations, or the level of parallelization involved in implementing and running machine learning algorithms.

5 votes · 1 answer · 72 views

I'm trying to fully pass through my GPU to a Hyper-V VM. However, all guides and tutorials only partition it, resulting in the GPU not appearing as a GPU in the VM's Task Manager performance tab. My ...
asked by Magdalena
2 votes · 1 answer · 135 views

I'm currently working on a Parallel and Distributed Computing project where I'm comparing the performance of both XGBoost and CatBoost when trained on CPU vs GPU. The goal is to demonstrate how GPU ...
asked by Mxneeb
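
A minimal sketch of the kind of CPU-vs-GPU timing comparison described above, on synthetic data. The parameter names assume XGBoost >= 2.0 (`device="cuda"` with `tree_method="hist"`) and a recent CatBoost (`task_type="GPU"`); older XGBoost versions use `tree_method="gpu_hist"` instead.

```python
import time
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

# Synthetic placeholder data, large enough for the GPU to matter.
X, y = make_classification(n_samples=200_000, n_features=50, random_state=0)

models = {
    "xgb_cpu": XGBClassifier(tree_method="hist", device="cpu", n_estimators=300),
    "xgb_gpu": XGBClassifier(tree_method="hist", device="cuda", n_estimators=300),
    "cat_cpu": CatBoostClassifier(task_type="CPU", iterations=300, verbose=False),
    "cat_gpu": CatBoostClassifier(task_type="GPU", devices="0", iterations=300, verbose=False),
}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X, y)
    print(f"{name}: {time.perf_counter() - start:.1f}s")  # wall-clock training time
```
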
6 votes · 0 answers · 65 views

I am trying to set up a VM on GCP, but every time I try to create an instance in Compute Engine, there is an error message saying that the configuration I asked for is not currently available in the ...
asked by Erwan
0 votes · 0 answers · 18 views

I am working on a typical classification task using the MNIST dataset and training with PyTorch Lightning and DDP. I am encountering an issue where the row sums in the confusion matrix are not ...
asked by FrancisVan
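
A minimal sketch of one common fix for this symptom (not the asker's code, assuming 10 MNIST classes and Lightning 2.x import paths): let a DDP-aware torchmetrics metric aggregate across ranks instead of summing per-rank confusion matrices by hand. Note that the default DistributedSampler pads the dataset so it divides evenly across ranks, which duplicates a few samples and can also inflate row sums.

```python
import lightning as L
from torchmetrics.classification import MulticlassConfusionMatrix

class MNISTClassifier(L.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        # torchmetrics state is synchronized across DDP ranks on compute()
        self.confmat = MulticlassConfusionMatrix(num_classes=10)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self.model(x).argmax(dim=1)
        self.confmat.update(preds, y)

    def on_validation_epoch_end(self):
        cm = self.confmat.compute()          # aggregated over all ranks
        print("row sums:", cm.sum(dim=1))    # should match per-class sample counts
        self.confmat.reset()
```
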
1 vote · 0 answers · 79 views

I am facing issues with getting a free port in the DDP setup block of PyTorch for parallelizing my deep learning training job across multiple GPUs on a Linux HPC cluster. I am trying to submit a deep ...
asked by Shataneek Banerjee
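
A minimal sketch (not the asker's code) of the usual trick for picking a free MASTER_PORT before `init_process_group`: bind a throwaway socket to port 0 and let the OS choose. On multi-node HPC jobs the port must be chosen on the rank-0 node and communicated to the other nodes, e.g. through the job script.

```python
import os
import socket
import torch.distributed as dist

def find_free_port() -> int:
    # Port 0 asks the OS for any currently free port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

def setup(rank: int, world_size: int, port: int):
    os.environ["MASTER_ADDR"] = os.environ.get("MASTER_ADDR", "localhost")
    os.environ["MASTER_PORT"] = str(port)   # must be the same value on every rank
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
```
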
1 vote · 0 answers · 107 views

I'm working with a large language model (LLM) that requires a large context window of 60,000 to 70,000 tokens for my application. My setup includes five GPUs, with three 16GB GPUs and two 8GB GPUs. I'...
asked by Bhalala Gaurav
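
A minimal sketch of one way to spread a model over a mix of 16 GB and 8 GB cards with Hugging Face Accelerate's `device_map`, capping per-GPU memory so there is headroom for the KV cache that a 60k-70k token context needs. The model name and memory caps are placeholders, not from the question.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-llm"  # placeholder model identifier
# Caps slightly below physical VRAM to leave room for activations and KV cache.
max_memory = {0: "14GiB", 1: "14GiB", 2: "14GiB", 3: "6GiB", 4: "6GiB"}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",        # let accelerate shard layers across the GPUs
    max_memory=max_memory,    # per-device budget for the sharding plan
)
```
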
1 vote · 0 answers · 140 views

When I convert an EfficientNetV2-M model from PyTorch to ONNX on differently sized inputs, I notice a strange and unexplained behavior. I was hoping to find an explanation for my observations from ...
asked by Nitish Agarwal
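
A minimal sketch (not the asker's code) of the usual way to export a single ONNX graph that accepts differently sized inputs, by declaring the spatial dimensions dynamic instead of exporting one model per resolution.

```python
import torch
import torchvision

model = torchvision.models.efficientnet_v2_m(weights=None).eval()
dummy = torch.randn(1, 3, 480, 480)   # shape used only to trace the graph

torch.onnx.export(
    model,
    dummy,
    "efficientnet_v2_m.onnx",
    input_names=["input"],
    output_names=["logits"],
    # Mark batch, height and width as dynamic so one graph serves all sizes.
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"}},
    opset_version=17,
)
```
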
2 votes · 1 answer · 2k views

I’m an engineering grad student, and I’ve been tasked with finding parts for building a shared workstation for my lab. Our work includes deep learning, computer vision, network analysis, reinforcement ...
asked by yuki
1 vote · 0 answers · 132 views

How do I estimate GPU requirements for model inference vs. model training/fine-tuning? If they differ, then by what ratio, just as a rule of thumb?
asked by Akhil Surapuram
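
A rough rule-of-thumb calculator, not an exact formula. Assumptions: fp16 weights for inference (~2 bytes per parameter, KV cache ignored) and mixed-precision full fine-tuning with Adam (~16 bytes per parameter: fp16 weights and gradients plus fp32 master weights and two optimizer states), activations excluded. Under those assumptions, training needs on the order of 8x the memory of inference.

```python
def inference_gib(params_billions: float) -> float:
    return params_billions * 1e9 * 2 / 2**30     # ~2 bytes per parameter

def full_finetune_gib(params_billions: float) -> float:
    return params_billions * 1e9 * 16 / 2**30    # ~16 bytes per parameter

for size in (7, 13, 70):
    print(f"{size}B params: ~{inference_gib(size):.0f} GiB inference, "
          f"~{full_finetune_gib(size):.0f} GiB full fine-tune (+ activations)")
```
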
0 votes · 0 answers · 69 views

I have a simple UNet model (~1M params) written in Keras 3.0.1, running with a torch backend. My CUDA version is ...
asked by Savindi
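
A minimal sketch (not the asker's code) of checking that the torch backend of Keras 3 actually sees the GPU: KERAS_BACKEND must be set before keras is imported, and `torch.cuda.is_available()` must return True, otherwise the model silently trains on the CPU.

```python
import os
os.environ["KERAS_BACKEND"] = "torch"   # must be set before `import keras`

import torch
import keras

print("torch CUDA available:", torch.cuda.is_available())
print("keras backend:", keras.backend.backend())   # expect "torch"
```
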
0 votes · 1 answer · 1k views

I ask this since I could not fix it with the help of the Stack Overflow question "RuntimeError: module must have its parameters and buffers on device cuda:1 (device_ids[0]) but found one of them on device: cuda:2" ...
asked by questionto42
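
A minimal sketch (not the asker's code) of the usual cause of that error: with `nn.DataParallel`, the module's parameters must already live on `device_ids[0]` before the forward pass, so the model has to be moved there explicitly before wrapping.

```python
import torch
import torch.nn as nn

device_ids = [1, 2]                        # whichever GPUs should be used
model = nn.Linear(128, 10)                 # placeholder module
model = model.to(f"cuda:{device_ids[0]}")  # parameters must sit on device_ids[0]
model = nn.DataParallel(model, device_ids=device_ids)

x = torch.randn(32, 128).to(f"cuda:{device_ids[0]}")  # inputs on the same device
out = model(x)
```
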
2 votes · 1 answer · 209 views

Strange mapping, example: in the following, the first column is the GPU index chosen in the code, and the second column is the GPU that does the work instead: 0:0 1234 MiB, 1:2 1234 MiB, 2:7 1234 MiB, 3:5 2341 MiB, 4:1 ...
asked by questionto42
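
A common cause of this kind of index mismatch is that CUDA enumerates devices fastest-first by default, while nvidia-smi lists them in PCI bus order. A minimal sketch (not the asker's code) of forcing a consistent ordering before any CUDA initialization:

```python
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # match nvidia-smi's numbering
os.environ["CUDA_VISIBLE_DEVICES"] = "0"         # now index 0 == nvidia-smi GPU 0

import torch
print(torch.cuda.get_device_name(0))
```
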
1 vote · 1 answer · 425 views

If you hold (mini) batch size constant (as well as everything else) but increase the number of examples (and therefore the number of training iterations), should you expect a (significant) increase in ...
asked by ubadub
0 votes · 1 answer · 258 views

My laptop has an NVIDIA GeForce GTX 1650 GPU. I want to utilize this GPU to run my Python script. Any help in the form of code would be really helpful. I tried researching this so much, but I couldn't ...
asked by Escanor6
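
A minimal sketch, assuming a PyTorch workload (TensorFlow has an analogous check), of verifying that the GTX 1650 is visible and moving a model and its inputs onto it. A CUDA-enabled build of the framework must be installed for `is_available()` to return True.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print("using:", torch.cuda.get_device_name(0))

model = torch.nn.Linear(10, 2).to(device)   # placeholder model
x = torch.randn(4, 10).to(device)           # inputs must live on the same device
print(model(x))
```
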
1 vote · 0 answers · 201 views

I have been using libSVM in a Python notebook to classify my dataset, and it takes approximately 5 hours for one run; 5-fold cross-validation will take more than a day. I am planning to ...
asked by khushi
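
A minimal sketch (not the asker's code) of one common way to move an SVM workload onto the GPU: ThunderSVM exposes a scikit-learn-style SVC with libSVM-style parameters, so the call can often be swapped with few other changes; RAPIDS cuML's `cuml.svm.SVC` is a similar alternative. The dataset below is a placeholder.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from thundersvm import SVC   # GPU-accelerated SVC

X, y = make_classification(n_samples=50_000, n_features=100, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma=0.01)
clf.fit(X_tr, y_tr)                                # training runs on the GPU
print(accuracy_score(y_te, clf.predict(X_te)))
```
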
